2026-02-04 00:00:08.628803 | Job console starting
2026-02-04 00:00:08.650186 | Updating git repos
2026-02-04 00:00:08.778125 | Cloning repos into workspace
2026-02-04 00:00:09.176437 | Restoring repo states
2026-02-04 00:00:09.215636 | Merging changes
2026-02-04 00:00:09.215659 | Checking out repos
2026-02-04 00:00:09.722600 | Preparing playbooks
2026-02-04 00:00:10.737576 | Running Ansible setup
2026-02-04 00:00:19.171170 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-04 00:00:20.352748 |
2026-02-04 00:00:20.352882 | PLAY [Base pre]
2026-02-04 00:00:20.403663 |
2026-02-04 00:00:20.403776 | TASK [Setup log path fact]
2026-02-04 00:00:20.452207 | orchestrator | ok
2026-02-04 00:00:20.476440 |
2026-02-04 00:00:20.476561 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-04 00:00:20.519955 | orchestrator | ok
2026-02-04 00:00:20.537334 |
2026-02-04 00:00:20.537437 | TASK [emit-job-header : Print job information]
2026-02-04 00:00:20.595259 | # Job Information
2026-02-04 00:00:20.595392 | Ansible Version: 2.16.14
2026-02-04 00:00:20.595421 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-02-04 00:00:20.595449 | Pipeline: periodic-midnight
2026-02-04 00:00:20.595469 | Executor: 521e9411259a
2026-02-04 00:00:20.595486 | Triggered by: https://github.com/osism/testbed
2026-02-04 00:00:20.595503 | Event ID: ae64838415194271b89fad81bc239d83
2026-02-04 00:00:20.601529 |
2026-02-04 00:00:20.601623 | LOOP [emit-job-header : Print node information]
2026-02-04 00:00:21.079265 | orchestrator | ok:
2026-02-04 00:00:21.079711 | orchestrator | # Node Information
2026-02-04 00:00:21.079757 | orchestrator | Inventory Hostname: orchestrator
2026-02-04 00:00:21.079912 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-04 00:00:21.079954 | orchestrator | Username: zuul-testbed04
2026-02-04 00:00:21.080086 | orchestrator | Distro: Debian 12.13
2026-02-04 00:00:21.080112 | orchestrator | Provider: static-testbed
2026-02-04 00:00:21.080133 | orchestrator | Region:
2026-02-04 00:00:21.080152 | orchestrator | Label: testbed-orchestrator
2026-02-04 00:00:21.080169 | orchestrator | Product Name: OpenStack Nova
2026-02-04 00:00:21.080186 | orchestrator | Interface IP: 81.163.193.140
2026-02-04 00:00:21.100554 |
2026-02-04 00:00:21.100662 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-04 00:00:22.468057 | orchestrator -> localhost | changed
2026-02-04 00:00:22.474467 |
2026-02-04 00:00:22.474558 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-04 00:00:25.967392 | orchestrator -> localhost | changed
2026-02-04 00:00:25.983272 |
2026-02-04 00:00:25.983368 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-04 00:00:26.742609 | orchestrator -> localhost | ok
2026-02-04 00:00:26.748441 |
2026-02-04 00:00:26.748537 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-04 00:00:26.785856 | orchestrator | ok
2026-02-04 00:00:26.842657 | orchestrator | included: /var/lib/zuul/builds/7dc19ffc5a194c77af8a4f9675ea5084/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-04 00:00:26.876674 |
2026-02-04 00:00:26.876764 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-04 00:00:30.938562 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-02-04 00:00:30.938726 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/7dc19ffc5a194c77af8a4f9675ea5084/work/7dc19ffc5a194c77af8a4f9675ea5084_id_rsa
2026-02-04 00:00:30.938758 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/7dc19ffc5a194c77af8a4f9675ea5084/work/7dc19ffc5a194c77af8a4f9675ea5084_id_rsa.pub
2026-02-04 00:00:30.938780 | orchestrator -> localhost | The key fingerprint is:
2026-02-04 00:00:30.938803 | orchestrator -> localhost | SHA256:Phd9w2BCCWvGVwWHWRwOjJUFYVqwFw3b9hqKZSJDMss zuul-build-sshkey
2026-02-04 00:00:30.938822 | orchestrator -> localhost | The key's randomart image is:
2026-02-04 00:00:30.938890 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-04 00:00:30.938911 | orchestrator -> localhost | | ..o*%#*. |
2026-02-04 00:00:30.938930 | orchestrator -> localhost | | . oo*=Bo |
2026-02-04 00:00:30.938947 | orchestrator -> localhost | | o * = = + |
2026-02-04 00:00:30.938964 | orchestrator -> localhost | | . B . = + . |
2026-02-04 00:00:30.938981 | orchestrator -> localhost | | E S o + = .|
2026-02-04 00:00:30.939002 | orchestrator -> localhost | | . o * o + |
2026-02-04 00:00:30.939020 | orchestrator -> localhost | | o o . . |
2026-02-04 00:00:30.939036 | orchestrator -> localhost | | o |
2026-02-04 00:00:30.939053 | orchestrator -> localhost | | |
2026-02-04 00:00:30.939070 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-04 00:00:30.939113 | orchestrator -> localhost | ok: Runtime: 0:00:02.530982
2026-02-04 00:00:30.945050 |
2026-02-04 00:00:30.945137 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-04 00:00:30.992398 | orchestrator | ok
2026-02-04 00:00:31.004524 | orchestrator | included: /var/lib/zuul/builds/7dc19ffc5a194c77af8a4f9675ea5084/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-04 00:00:31.016140 |
2026-02-04 00:00:31.016238 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-04 00:00:31.049216 | orchestrator | skipping: Conditional result was False
2026-02-04 00:00:31.056383 |
2026-02-04 00:00:31.056485 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-04 00:00:32.006919 | orchestrator | changed
2026-02-04 00:00:32.025211 |
2026-02-04 00:00:32.025309 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-04 00:00:32.319720 | orchestrator | ok
2026-02-04 00:00:32.326127 |
2026-02-04 00:00:32.326217 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-04 00:00:32.816193 | orchestrator | ok
2026-02-04 00:00:32.837658 |
2026-02-04 00:00:32.837757 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-04 00:00:33.348972 | orchestrator | ok
2026-02-04 00:00:33.354001 |
2026-02-04 00:00:33.354079 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-04 00:00:33.387112 | orchestrator | skipping: Conditional result was False
2026-02-04 00:00:33.398793 |
2026-02-04 00:00:33.399811 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-04 00:00:34.271820 | orchestrator -> localhost | changed
2026-02-04 00:00:34.297220 |
2026-02-04 00:00:34.297319 | TASK [add-build-sshkey : Add back temp key]
2026-02-04 00:00:35.207219 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/7dc19ffc5a194c77af8a4f9675ea5084/work/7dc19ffc5a194c77af8a4f9675ea5084_id_rsa (zuul-build-sshkey)
2026-02-04 00:00:35.207407 | orchestrator -> localhost | ok: Runtime: 0:00:00.032912
2026-02-04 00:00:35.213302 |
2026-02-04 00:00:35.213388 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-04 00:00:35.796709 | orchestrator | ok
2026-02-04 00:00:35.801688 |
2026-02-04 00:00:35.801775 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-04 00:00:35.849669 | orchestrator | skipping: Conditional result was False
2026-02-04 00:00:35.971379 |
2026-02-04 00:00:35.971481 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-04 00:00:36.606194 | orchestrator | ok
2026-02-04 00:00:36.620627 |
2026-02-04 00:00:36.620729 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-04 00:00:36.666279 | orchestrator | ok
2026-02-04 00:00:36.682310 |
2026-02-04 00:00:36.682689 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-04 00:00:37.344715 | orchestrator -> localhost | ok
2026-02-04 00:00:37.351638 |
2026-02-04 00:00:37.351726 | TASK [validate-host : Collect information about the host]
2026-02-04 00:00:38.745928 | orchestrator | ok
2026-02-04 00:00:38.769275 |
2026-02-04 00:00:38.769383 | TASK [validate-host : Sanitize hostname]
2026-02-04 00:00:38.886952 | orchestrator | ok
2026-02-04 00:00:38.891658 |
2026-02-04 00:00:38.891743 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-04 00:00:40.535317 | orchestrator -> localhost | changed
2026-02-04 00:00:40.540359 |
2026-02-04 00:00:40.540446 | TASK [validate-host : Collect information about zuul worker]
2026-02-04 00:00:40.971276 | orchestrator | ok
2026-02-04 00:00:40.976268 |
2026-02-04 00:00:40.976352 | TASK [validate-host : Write out all zuul information for each host]
2026-02-04 00:00:42.266461 | orchestrator -> localhost | changed
2026-02-04 00:00:42.275111 |
2026-02-04 00:00:42.275205 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-04 00:00:42.588731 | orchestrator | ok
2026-02-04 00:00:42.594766 |
2026-02-04 00:00:42.599774 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-04 00:01:57.020073 | orchestrator | changed:
2026-02-04 00:01:57.020303 | orchestrator | .d..t...... src/
2026-02-04 00:01:57.020339 | orchestrator | .d..t...... src/github.com/
2026-02-04 00:01:57.020364 | orchestrator | .d..t...... src/github.com/osism/
2026-02-04 00:01:57.020386 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-04 00:01:57.020407 | orchestrator | RedHat.yml
2026-02-04 00:01:57.047584 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-04 00:01:57.047602 | orchestrator | RedHat.yml
2026-02-04 00:01:57.047655 | orchestrator | = 2.2.0"...
2026-02-04 00:02:11.423605 | orchestrator | - Finding latest version of hashicorp/null...
2026-02-04 00:02:11.441647 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-02-04 00:02:11.601736 | orchestrator | - Installing hashicorp/local v2.6.2...
2026-02-04 00:02:12.286138 | orchestrator | - Installed hashicorp/local v2.6.2 (signed, key ID 0C0AF313E5FD9F80)
2026-02-04 00:02:12.346297 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-04 00:02:12.888968 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-04 00:02:12.947061 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-04 00:02:13.768711 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-04 00:02:13.768860 | orchestrator |
2026-02-04 00:02:13.768870 | orchestrator | Providers are signed by their developers.
2026-02-04 00:02:13.768875 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-04 00:02:13.768884 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-04 00:02:13.768890 | orchestrator |
2026-02-04 00:02:13.768895 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-04 00:02:13.768904 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-04 00:02:13.768908 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-04 00:02:13.768912 | orchestrator | you run "tofu init" in the future.
2026-02-04 00:02:13.769291 | orchestrator |
2026-02-04 00:02:13.769303 | orchestrator | OpenTofu has been successfully initialized!
2026-02-04 00:02:13.769307 | orchestrator |
2026-02-04 00:02:13.769311 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-04 00:02:13.769316 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-04 00:02:13.769320 | orchestrator | should now work.
2026-02-04 00:02:13.769324 | orchestrator |
2026-02-04 00:02:13.769328 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-04 00:02:13.769341 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-04 00:02:13.769346 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-04 00:02:13.928118 | orchestrator | Created and switched to workspace "ci"!
2026-02-04 00:02:13.928238 | orchestrator |
2026-02-04 00:02:13.928253 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-04 00:02:13.928262 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-04 00:02:13.928269 | orchestrator | for this configuration.
2026-02-04 00:02:14.116959 | orchestrator | ci.auto.tfvars
2026-02-04 00:02:14.363837 | orchestrator | default_custom.tf
2026-02-04 00:02:16.056935 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-04 00:02:16.570112 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-04 00:02:16.933524 | orchestrator |
2026-02-04 00:02:16.933590 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-04 00:02:16.933597 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-04 00:02:16.933602 | orchestrator |   + create
2026-02-04 00:02:16.933607 | orchestrator |  <= read (data resources)
2026-02-04 00:02:16.933612 | orchestrator |
2026-02-04 00:02:16.933616 | orchestrator | OpenTofu will perform the following actions:
2026-02-04 00:02:16.933627 | orchestrator |
2026-02-04 00:02:16.933632 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-02-04 00:02:16.933636 | orchestrator |   # (config refers to values not yet known)
2026-02-04 00:02:16.933640 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-02-04 00:02:16.933644 | orchestrator |       + checksum = (known after apply)
2026-02-04 00:02:16.933648 | orchestrator |       + created_at = (known after apply)
2026-02-04 00:02:16.933652 | orchestrator |       + file = (known after apply)
2026-02-04 00:02:16.933656 | orchestrator |       + id = (known after apply)
2026-02-04 00:02:16.933676 | orchestrator |       + metadata = (known after apply)
2026-02-04 00:02:16.933681 | orchestrator |       + min_disk_gb = (known after apply)
2026-02-04 00:02:16.933685 | orchestrator |       + min_ram_mb = (known after apply)
2026-02-04 00:02:16.933689 | orchestrator |       + most_recent = true
2026-02-04 00:02:16.933693 | orchestrator |       + name = (known after apply)
2026-02-04 00:02:16.933697 | orchestrator |       + protected = (known after apply)
2026-02-04 00:02:16.933701 | orchestrator |       + region = (known after apply)
2026-02-04 00:02:16.933708 | orchestrator |       + schema = (known after apply)
2026-02-04 00:02:16.933712 | orchestrator |       + size_bytes = (known after apply)
2026-02-04 00:02:16.933715 | orchestrator |       + tags = (known after apply)
2026-02-04 00:02:16.933719 | orchestrator |       + updated_at = (known after apply)
2026-02-04 00:02:16.933723 | orchestrator |     }
2026-02-04 00:02:16.933729 | orchestrator |
2026-02-04 00:02:16.933733 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-02-04 00:02:16.933737 | orchestrator |   # (config refers to values not yet known)
2026-02-04 00:02:16.933741 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-02-04 00:02:16.933745 | orchestrator |       + checksum = (known after apply)
2026-02-04 00:02:16.933748 | orchestrator |       + created_at = (known after apply)
2026-02-04 00:02:16.933752 | orchestrator |       + file = (known after apply)
2026-02-04 00:02:16.933756 | orchestrator |       + id = (known after apply)
2026-02-04 00:02:16.933760 | orchestrator |       + metadata = (known after apply)
2026-02-04 00:02:16.933763 | orchestrator |       + min_disk_gb = (known after apply)
2026-02-04 00:02:16.933767 | orchestrator |       + min_ram_mb = (known after apply)
2026-02-04 00:02:16.933771 | orchestrator |       + most_recent = true
2026-02-04 00:02:16.933775 | orchestrator |       + name = (known after apply)
2026-02-04 00:02:16.933779 | orchestrator |       + protected = (known after apply)
2026-02-04 00:02:16.933782 | orchestrator |       + region = (known after apply)
2026-02-04 00:02:16.933786 | orchestrator |       + schema = (known after apply)
2026-02-04 00:02:16.933790 | orchestrator |       + size_bytes = (known after apply)
2026-02-04 00:02:16.933794 | orchestrator |       + tags = (known after apply)
2026-02-04 00:02:16.933797 | orchestrator |       + updated_at = (known after apply)
2026-02-04 00:02:16.933801 | orchestrator |     }
2026-02-04 00:02:16.933806 | orchestrator |
2026-02-04 00:02:16.933810 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-02-04 00:02:16.933814 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-02-04 00:02:16.933818 | orchestrator |       + content = (known after apply)
2026-02-04 00:02:16.933823 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-04 00:02:16.933826 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-04 00:02:16.933830 | orchestrator |       + content_md5 = (known after apply)
2026-02-04 00:02:16.933834 | orchestrator |       + content_sha1 = (known after apply)
2026-02-04 00:02:16.933838 | orchestrator |       + content_sha256 = (known after apply)
2026-02-04 00:02:16.933841 | orchestrator |       + content_sha512 = (known after apply)
2026-02-04 00:02:16.933845 | orchestrator |       + directory_permission = "0777"
2026-02-04 00:02:16.933849 | orchestrator |       + file_permission = "0644"
2026-02-04 00:02:16.933853 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-02-04 00:02:16.933857 | orchestrator |       + id = (known after apply)
2026-02-04 00:02:16.933860 | orchestrator |     }
2026-02-04 00:02:16.933866 | orchestrator |
2026-02-04 00:02:16.933869 | orchestrator |   # local_file.id_rsa_pub will be created
2026-02-04 00:02:16.933873 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-02-04 00:02:16.933877 | orchestrator |       + content = (known after apply)
2026-02-04 00:02:16.933881 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-04 00:02:16.933884 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-04 00:02:16.933888 | orchestrator |       + content_md5 = (known after apply)
2026-02-04 00:02:16.933892 | orchestrator |       + content_sha1 = (known after apply)
2026-02-04 00:02:16.933896 | orchestrator |       + content_sha256 = (known after apply)
2026-02-04 00:02:16.933905 | orchestrator |       + content_sha512 = (known after apply)
2026-02-04 00:02:16.933909 | orchestrator |       + directory_permission = "0777"
2026-02-04 00:02:16.933912 | orchestrator |       + file_permission = "0644"
2026-02-04 00:02:16.933920 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-02-04 00:02:16.933924 | orchestrator |       + id = (known after apply)
2026-02-04 00:02:16.933928 | orchestrator |     }
2026-02-04 00:02:16.933933 | orchestrator |
2026-02-04 00:02:16.933937 | orchestrator |   # local_file.inventory will be created
2026-02-04 00:02:16.933941 | orchestrator |   + resource "local_file" "inventory" {
2026-02-04 00:02:16.933945 | orchestrator |       + content = (known after apply)
2026-02-04 00:02:16.933948 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-04 00:02:16.933952 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-04 00:02:16.933956 | orchestrator |       + content_md5 = (known after apply)
2026-02-04 00:02:16.933959 | orchestrator |       + content_sha1 = (known after apply)
2026-02-04 00:02:16.933963 | orchestrator |       + content_sha256 = (known after apply)
2026-02-04 00:02:16.933967 | orchestrator |       + content_sha512 = (known after apply)
2026-02-04 00:02:16.933971 | orchestrator |       + directory_permission = "0777"
2026-02-04 00:02:16.933975 | orchestrator |       + file_permission = "0644"
2026-02-04 00:02:16.933978 | orchestrator |       + filename = "inventory.ci"
2026-02-04 00:02:16.933982 | orchestrator |       + id = (known after apply)
2026-02-04 00:02:16.933986 | orchestrator |     }
2026-02-04 00:02:16.934036 | orchestrator |
2026-02-04 00:02:16.937269 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-02-04 00:02:16.937299 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-02-04 00:02:16.937303 | orchestrator |       + content = (sensitive value)
2026-02-04 00:02:16.937307 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-04 00:02:16.937311 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-04 00:02:16.937315 | orchestrator |       + content_md5 = (known after apply)
2026-02-04 00:02:16.937319 | orchestrator |       + content_sha1 = (known after apply)
2026-02-04 00:02:16.937323 | orchestrator |       + content_sha256 = (known after apply)
2026-02-04 00:02:16.937327 | orchestrator |       + content_sha512 = (known after apply)
2026-02-04 00:02:16.937332 | orchestrator |       + directory_permission = "0700"
2026-02-04 00:02:16.937337 | orchestrator |       + file_permission = "0600"
2026-02-04 00:02:16.937342 | orchestrator |       + filename = ".id_rsa.ci"
2026-02-04 00:02:16.937345 | orchestrator |       + id = (known after apply)
2026-02-04 00:02:16.937350 | orchestrator |     }
2026-02-04 00:02:16.937357 | orchestrator |
2026-02-04 00:02:16.937361 | orchestrator |   # null_resource.node_semaphore will be created
2026-02-04 00:02:16.937365 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-02-04 00:02:16.937369 | orchestrator |       + id = (known after apply)
2026-02-04 00:02:16.937373 | orchestrator |     }
2026-02-04 00:02:16.946192 | orchestrator |
2026-02-04 00:02:16.946233 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-04 00:02:16.946240 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-04 00:02:16.946244 | orchestrator |       + attachment = (known after apply)
2026-02-04 00:02:16.946249 | orchestrator |       + availability_zone = "nova"
2026-02-04 00:02:16.946253 | orchestrator |       + id = (known after apply)
2026-02-04 00:02:16.946258 | orchestrator |       + image_id = (known after apply)
2026-02-04 00:02:16.946262 | orchestrator |       + metadata = (known after apply)
2026-02-04 00:02:16.946266 | orchestrator |       + name = "testbed-volume-manager-base"
2026-02-04 00:02:16.946269 | orchestrator |       + region = (known after apply)
2026-02-04 00:02:16.946274 | orchestrator |       + size = 80
2026-02-04 00:02:16.946278 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 00:02:16.946282 | orchestrator |       + volume_type = "ssd"
2026-02-04 00:02:16.946286 | orchestrator |     }
2026-02-04 00:02:16.946290 | orchestrator |
2026-02-04 00:02:16.946295 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-04 00:02:16.946299 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-04 00:02:16.946303 | orchestrator |       + attachment = (known after apply)
2026-02-04 00:02:16.946307 | orchestrator |       + availability_zone = "nova"
2026-02-04 00:02:16.946311 | orchestrator |       + id = (known after apply)
2026-02-04 00:02:16.946329 | orchestrator |       + image_id = (known after apply)
2026-02-04 00:02:16.946334 | orchestrator |       + metadata = (known after apply)
2026-02-04 00:02:16.946338 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-02-04 00:02:16.946342 | orchestrator |       + region = (known after apply)
2026-02-04 00:02:16.946346 | orchestrator |       + size = 80
2026-02-04 00:02:16.946350 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 00:02:16.946354 | orchestrator |       + volume_type = "ssd"
2026-02-04 00:02:16.946358 | orchestrator |     }
2026-02-04 00:02:16.946362 | orchestrator |
2026-02-04 00:02:16.946366 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-04 00:02:16.946370 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-04 00:02:16.946374 | orchestrator |       + attachment = (known after apply)
2026-02-04 00:02:16.946379 | orchestrator |       + availability_zone = "nova"
2026-02-04 00:02:16.946383 | orchestrator |       + id = (known after apply)
2026-02-04 00:02:16.946387 | orchestrator |       + image_id = (known after apply)
2026-02-04 00:02:16.946391 | orchestrator |       + metadata = (known after apply)
2026-02-04 00:02:16.946395 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-02-04 00:02:16.946399 | orchestrator |       + region = (known after apply)
2026-02-04 00:02:16.946403 | orchestrator |       + size = 80
2026-02-04 00:02:16.946407 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 00:02:16.946410 | orchestrator |       + volume_type = "ssd"
2026-02-04 00:02:16.946414 | orchestrator |     }
2026-02-04 00:02:16.946419 | orchestrator |
2026-02-04 00:02:16.946422 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-04 00:02:16.946426 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-04 00:02:16.946431 | orchestrator |       + attachment = (known after apply)
2026-02-04 00:02:16.946434 | orchestrator |       + availability_zone = "nova"
2026-02-04 00:02:16.946438 | orchestrator |       + id = (known after apply)
2026-02-04 00:02:16.946442 | orchestrator |       + image_id = (known after apply)
2026-02-04 00:02:16.946446 | orchestrator |       + metadata = (known after apply)
2026-02-04 00:02:16.946450 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-02-04 00:02:16.946454 | orchestrator |       + region = (known after apply)
2026-02-04 00:02:16.946458 | orchestrator |       + size = 80
2026-02-04 00:02:16.946467 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 00:02:16.946471 | orchestrator |       + volume_type = "ssd"
2026-02-04 00:02:16.946475 | orchestrator |     }
2026-02-04 00:02:16.946479 | orchestrator |
2026-02-04 00:02:16.946483 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-04 00:02:16.946500 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-04 00:02:16.946504 | orchestrator |       + attachment = (known after apply)
2026-02-04 00:02:16.946508 | orchestrator |       + availability_zone = "nova"
2026-02-04 00:02:16.946512 | orchestrator |       + id = (known after apply)
2026-02-04 00:02:16.946516 | orchestrator |       + image_id = (known after apply)
2026-02-04 00:02:16.946520 | orchestrator |       + metadata = (known after apply)
2026-02-04 00:02:16.946524 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-02-04 00:02:16.946528 | orchestrator |       + region = (known after apply)
2026-02-04 00:02:16.946532 | orchestrator |       + size = 80
2026-02-04 00:02:16.946536 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 00:02:16.946540 | orchestrator |       + volume_type = "ssd"
2026-02-04 00:02:16.946544 | orchestrator |     }
2026-02-04 00:02:16.946548 | orchestrator |
2026-02-04 00:02:16.946552 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-04 00:02:16.946556 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-04 00:02:16.946561 | orchestrator |       + attachment = (known after apply)
2026-02-04 00:02:16.946565 | orchestrator |       + availability_zone = "nova"
2026-02-04 00:02:16.946569 | orchestrator |       + id = (known after apply)
2026-02-04 00:02:16.946579 | orchestrator |       + image_id = (known after apply)
2026-02-04 00:02:16.946583 | orchestrator |       + metadata = (known after apply)
2026-02-04 00:02:16.946587 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-02-04 00:02:16.946591 | orchestrator |       + region = (known after apply)
2026-02-04 00:02:16.946595 | orchestrator |       + size = 80
2026-02-04 00:02:16.946599 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 00:02:16.946603 | orchestrator |       + volume_type = "ssd"
2026-02-04 00:02:16.946607 | orchestrator |     }
2026-02-04 00:02:16.946611 | orchestrator |
2026-02-04 00:02:16.946615 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-04 00:02:16.946619 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-04 00:02:16.946623 | orchestrator |       + attachment = (known after apply)
2026-02-04 00:02:16.946627 | orchestrator |       + availability_zone = "nova"
2026-02-04 00:02:16.946631 | orchestrator |       + id = (known after apply)
2026-02-04 00:02:16.946635 | orchestrator |       + image_id = (known after apply)
2026-02-04 00:02:16.946639 | orchestrator |       + metadata = (known after apply)
2026-02-04 00:02:16.946652 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-02-04 00:02:16.946656 | orchestrator |       + region = (known after apply)
2026-02-04 00:02:16.946661 | orchestrator |       + size = 80
2026-02-04 00:02:16.946665 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 00:02:16.946669 | orchestrator |       + volume_type = "ssd"
2026-02-04 00:02:16.946673 | orchestrator |     }
2026-02-04 00:02:16.946677 | orchestrator |
2026-02-04 00:02:16.946681 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-04 00:02:16.946687 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 00:02:16.946691 | orchestrator |       + attachment = (known after apply)
2026-02-04 00:02:16.946696 | orchestrator |       + availability_zone = "nova"
2026-02-04 00:02:16.946699 | orchestrator |       + id = (known after apply)
2026-02-04 00:02:16.946703 | orchestrator |       + metadata = (known after apply)
2026-02-04 00:02:16.946708 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-02-04 00:02:16.946712 | orchestrator |       + region = (known after apply)
2026-02-04 00:02:16.946716 | orchestrator |       + size = 20
2026-02-04 00:02:16.946720 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 00:02:16.946724 | orchestrator |       + volume_type = "ssd"
2026-02-04 00:02:16.946729 | orchestrator |     }
2026-02-04 00:02:16.946733 | orchestrator |
2026-02-04 00:02:16.946737 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-04 00:02:16.946741 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 00:02:16.946745 | orchestrator |       + attachment = (known after apply)
2026-02-04 00:02:16.946749 | orchestrator |       + availability_zone = "nova"
2026-02-04 00:02:16.946753 | orchestrator |       + id = (known after apply)
2026-02-04 00:02:16.946757 | orchestrator |       + metadata = (known after apply)
2026-02-04 00:02:16.946761 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-02-04 00:02:16.946765 | orchestrator |       + region = (known after apply)
2026-02-04 00:02:16.946769 | orchestrator |       + size = 20
2026-02-04 00:02:16.946773 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 00:02:16.946777 | orchestrator |       + volume_type = "ssd"
2026-02-04 00:02:16.946781 | orchestrator |     }
2026-02-04 00:02:16.946784 | orchestrator |
2026-02-04 00:02:16.946788 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-04 00:02:16.946792 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 00:02:16.946797 | orchestrator |       + attachment = (known after apply)
2026-02-04 00:02:16.946800 | orchestrator |       + availability_zone = "nova"
2026-02-04 00:02:16.946804 | orchestrator |       + id = (known after apply)
2026-02-04 00:02:16.946808 | orchestrator |       + metadata = (known after apply)
2026-02-04 00:02:16.946813 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-02-04 00:02:16.946816 | orchestrator |       + region = (known after apply)
2026-02-04 00:02:16.946824 | orchestrator |       + size = 20
2026-02-04 00:02:16.946828 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 00:02:16.946833 | orchestrator |       + volume_type = "ssd"
2026-02-04 00:02:16.946836 | orchestrator |     }
2026-02-04 00:02:16.946841 | orchestrator |
2026-02-04 00:02:16.946845 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-04 00:02:16.946849 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 00:02:16.946852 | orchestrator |       + attachment = (known after apply)
2026-02-04 00:02:16.946856 | orchestrator |       + availability_zone = "nova"
2026-02-04 00:02:16.946860 | orchestrator |       + id = (known after apply)
2026-02-04 00:02:16.946867 | orchestrator |       + metadata = (known after apply)
2026-02-04 00:02:16.946870 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-02-04 00:02:16.946874 | orchestrator |       + region = (known after apply)
2026-02-04 00:02:16.946878 | orchestrator |       + size = 20
2026-02-04 00:02:16.946882 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 00:02:16.946886 | orchestrator |       + volume_type = "ssd"
2026-02-04 00:02:16.946890 | orchestrator |     }
2026-02-04 00:02:16.946894 | orchestrator |
2026-02-04 00:02:16.946898 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-04 00:02:16.946902 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 00:02:16.946906 | orchestrator |       + attachment = (known after apply)
2026-02-04 00:02:16.946910 | orchestrator |       + availability_zone = "nova"
2026-02-04 00:02:16.946914 | orchestrator |       + id = (known after apply)
2026-02-04 00:02:16.946918 | orchestrator |       + metadata = (known after apply)
2026-02-04 00:02:16.946922 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-02-04 00:02:16.946926 | orchestrator |       + region = (known after apply)
2026-02-04 00:02:16.946930 | orchestrator |       + size = 20
2026-02-04 00:02:16.946934 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 00:02:16.946938 | orchestrator |       + volume_type = "ssd"
2026-02-04 00:02:16.946942 | orchestrator |     }
2026-02-04 00:02:16.946946 | orchestrator |
2026-02-04 00:02:16.946950 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-04 00:02:16.946954 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 00:02:16.946958 | orchestrator |       + attachment = (known after apply)
2026-02-04 00:02:16.946962 | orchestrator |       + availability_zone = "nova"
2026-02-04 00:02:16.946966 | orchestrator |       + id = (known after apply)
2026-02-04 00:02:16.946970 | orchestrator |       + metadata = (known after apply)
2026-02-04 00:02:16.946974 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-02-04 00:02:16.946978 | orchestrator |       + region = (known after apply)
2026-02-04 00:02:16.946983 | orchestrator |       + size = 20
2026-02-04 00:02:16.946987 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 00:02:16.946991 | orchestrator |       + volume_type = "ssd"
2026-02-04 00:02:16.946995 | orchestrator |     }
2026-02-04 00:02:16.946999 | orchestrator |
2026-02-04 00:02:16.947003 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-04 00:02:16.947007 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 00:02:16.947011 | orchestrator |       + attachment = (known after apply)
2026-02-04 00:02:16.947015 | orchestrator |       + availability_zone = "nova"
2026-02-04 00:02:16.947019 | orchestrator |       + id = (known after apply)
2026-02-04 00:02:16.947023 | orchestrator |       + metadata = (known after apply)
2026-02-04 00:02:16.947027 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-02-04 00:02:16.947031 | orchestrator |       + region = (known after apply)
2026-02-04 00:02:16.947035 | orchestrator |       + size = 20
2026-02-04 00:02:16.947039 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 00:02:16.947043 | orchestrator |       + volume_type = "ssd"
2026-02-04 00:02:16.947047 | orchestrator |     }
2026-02-04 00:02:16.947051 | orchestrator |
2026-02-04 00:02:16.947058 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-04 00:02:16.947062 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 00:02:16.947070 | orchestrator |       + attachment = (known after apply)
2026-02-04 00:02:16.947074 | orchestrator |       + availability_zone = "nova"
2026-02-04 00:02:16.947078 | orchestrator |       + id = (known after apply)
2026-02-04 00:02:16.947082 | orchestrator |       + metadata = (known after apply)
2026-02-04 00:02:16.947086 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-02-04 00:02:16.947090 | orchestrator |       + region = (known after apply)
2026-02-04 00:02:16.947094 | orchestrator | + size = 20 2026-02-04 00:02:16.947098 | orchestrator | + volume_retype_policy = "never" 2026-02-04 00:02:16.947102 | orchestrator | + volume_type = "ssd" 2026-02-04 00:02:16.947106 | orchestrator | } 2026-02-04 00:02:16.947110 | orchestrator | 2026-02-04 00:02:16.947114 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-04 00:02:16.947118 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-04 00:02:16.947122 | orchestrator | + attachment = (known after apply) 2026-02-04 00:02:16.947127 | orchestrator | + availability_zone = "nova" 2026-02-04 00:02:16.947131 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.947135 | orchestrator | + metadata = (known after apply) 2026-02-04 00:02:16.947139 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-04 00:02:16.947143 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.947147 | orchestrator | + size = 20 2026-02-04 00:02:16.947151 | orchestrator | + volume_retype_policy = "never" 2026-02-04 00:02:16.947155 | orchestrator | + volume_type = "ssd" 2026-02-04 00:02:16.947159 | orchestrator | } 2026-02-04 00:02:16.947163 | orchestrator | 2026-02-04 00:02:16.947196 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-04 00:02:16.947201 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-04 00:02:16.947205 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-04 00:02:16.947209 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-04 00:02:16.947213 | orchestrator | + all_metadata = (known after apply) 2026-02-04 00:02:16.947217 | orchestrator | + all_tags = (known after apply) 2026-02-04 00:02:16.947221 | orchestrator | + availability_zone = "nova" 2026-02-04 00:02:16.947225 | orchestrator | + config_drive = true 2026-02-04 00:02:16.947232 | orchestrator | + created = (known after apply) 
2026-02-04 00:02:16.947236 | orchestrator | + flavor_id = (known after apply) 2026-02-04 00:02:16.947241 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-04 00:02:16.947245 | orchestrator | + force_delete = false 2026-02-04 00:02:16.947249 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-04 00:02:16.947253 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.947257 | orchestrator | + image_id = (known after apply) 2026-02-04 00:02:16.947261 | orchestrator | + image_name = (known after apply) 2026-02-04 00:02:16.947265 | orchestrator | + key_pair = "testbed" 2026-02-04 00:02:16.947269 | orchestrator | + name = "testbed-manager" 2026-02-04 00:02:16.947273 | orchestrator | + power_state = "active" 2026-02-04 00:02:16.947277 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.947280 | orchestrator | + security_groups = (known after apply) 2026-02-04 00:02:16.947284 | orchestrator | + stop_before_destroy = false 2026-02-04 00:02:16.947288 | orchestrator | + updated = (known after apply) 2026-02-04 00:02:16.947292 | orchestrator | + user_data = (sensitive value) 2026-02-04 00:02:16.947296 | orchestrator | 2026-02-04 00:02:16.947300 | orchestrator | + block_device { 2026-02-04 00:02:16.947304 | orchestrator | + boot_index = 0 2026-02-04 00:02:16.947308 | orchestrator | + delete_on_termination = false 2026-02-04 00:02:16.947312 | orchestrator | + destination_type = "volume" 2026-02-04 00:02:16.947316 | orchestrator | + multiattach = false 2026-02-04 00:02:16.947319 | orchestrator | + source_type = "volume" 2026-02-04 00:02:16.947324 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.947331 | orchestrator | } 2026-02-04 00:02:16.947336 | orchestrator | 2026-02-04 00:02:16.947340 | orchestrator | + network { 2026-02-04 00:02:16.947344 | orchestrator | + access_network = false 2026-02-04 00:02:16.947348 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-04 00:02:16.947352 | orchestrator | + 
fixed_ip_v6 = (known after apply) 2026-02-04 00:02:16.947356 | orchestrator | + mac = (known after apply) 2026-02-04 00:02:16.947360 | orchestrator | + name = (known after apply) 2026-02-04 00:02:16.947365 | orchestrator | + port = (known after apply) 2026-02-04 00:02:16.947369 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.947373 | orchestrator | } 2026-02-04 00:02:16.947377 | orchestrator | } 2026-02-04 00:02:16.947381 | orchestrator | 2026-02-04 00:02:16.947386 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-04 00:02:16.947389 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-04 00:02:16.947394 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-04 00:02:16.947398 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-04 00:02:16.947402 | orchestrator | + all_metadata = (known after apply) 2026-02-04 00:02:16.947406 | orchestrator | + all_tags = (known after apply) 2026-02-04 00:02:16.947410 | orchestrator | + availability_zone = "nova" 2026-02-04 00:02:16.947414 | orchestrator | + config_drive = true 2026-02-04 00:02:16.947418 | orchestrator | + created = (known after apply) 2026-02-04 00:02:16.947422 | orchestrator | + flavor_id = (known after apply) 2026-02-04 00:02:16.947426 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-04 00:02:16.947430 | orchestrator | + force_delete = false 2026-02-04 00:02:16.947434 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-04 00:02:16.947439 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.947443 | orchestrator | + image_id = (known after apply) 2026-02-04 00:02:16.947446 | orchestrator | + image_name = (known after apply) 2026-02-04 00:02:16.947451 | orchestrator | + key_pair = "testbed" 2026-02-04 00:02:16.947455 | orchestrator | + name = "testbed-node-0" 2026-02-04 00:02:16.947459 | orchestrator | + power_state = "active" 2026-02-04 00:02:16.947463 | orchestrator | + region 
= (known after apply) 2026-02-04 00:02:16.947467 | orchestrator | + security_groups = (known after apply) 2026-02-04 00:02:16.947471 | orchestrator | + stop_before_destroy = false 2026-02-04 00:02:16.947475 | orchestrator | + updated = (known after apply) 2026-02-04 00:02:16.947491 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-04 00:02:16.947495 | orchestrator | 2026-02-04 00:02:16.947499 | orchestrator | + block_device { 2026-02-04 00:02:16.947503 | orchestrator | + boot_index = 0 2026-02-04 00:02:16.947511 | orchestrator | + delete_on_termination = false 2026-02-04 00:02:16.947515 | orchestrator | + destination_type = "volume" 2026-02-04 00:02:16.947519 | orchestrator | + multiattach = false 2026-02-04 00:02:16.947523 | orchestrator | + source_type = "volume" 2026-02-04 00:02:16.947526 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.947530 | orchestrator | } 2026-02-04 00:02:16.947535 | orchestrator | 2026-02-04 00:02:16.947539 | orchestrator | + network { 2026-02-04 00:02:16.947543 | orchestrator | + access_network = false 2026-02-04 00:02:16.947548 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-04 00:02:16.947551 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-04 00:02:16.947555 | orchestrator | + mac = (known after apply) 2026-02-04 00:02:16.947559 | orchestrator | + name = (known after apply) 2026-02-04 00:02:16.947564 | orchestrator | + port = (known after apply) 2026-02-04 00:02:16.947568 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.947572 | orchestrator | } 2026-02-04 00:02:16.947576 | orchestrator | } 2026-02-04 00:02:16.947580 | orchestrator | 2026-02-04 00:02:16.947584 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-04 00:02:16.947588 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-04 00:02:16.947592 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-04 
00:02:16.947600 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-04 00:02:16.947604 | orchestrator | + all_metadata = (known after apply) 2026-02-04 00:02:16.947608 | orchestrator | + all_tags = (known after apply) 2026-02-04 00:02:16.947612 | orchestrator | + availability_zone = "nova" 2026-02-04 00:02:16.947616 | orchestrator | + config_drive = true 2026-02-04 00:02:16.947620 | orchestrator | + created = (known after apply) 2026-02-04 00:02:16.947624 | orchestrator | + flavor_id = (known after apply) 2026-02-04 00:02:16.947628 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-04 00:02:16.947632 | orchestrator | + force_delete = false 2026-02-04 00:02:16.947636 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-04 00:02:16.947640 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.947645 | orchestrator | + image_id = (known after apply) 2026-02-04 00:02:16.947649 | orchestrator | + image_name = (known after apply) 2026-02-04 00:02:16.947653 | orchestrator | + key_pair = "testbed" 2026-02-04 00:02:16.947657 | orchestrator | + name = "testbed-node-1" 2026-02-04 00:02:16.947661 | orchestrator | + power_state = "active" 2026-02-04 00:02:16.947665 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.947670 | orchestrator | + security_groups = (known after apply) 2026-02-04 00:02:16.947674 | orchestrator | + stop_before_destroy = false 2026-02-04 00:02:16.947678 | orchestrator | + updated = (known after apply) 2026-02-04 00:02:16.947684 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-04 00:02:16.947689 | orchestrator | 2026-02-04 00:02:16.947693 | orchestrator | + block_device { 2026-02-04 00:02:16.947697 | orchestrator | + boot_index = 0 2026-02-04 00:02:16.947701 | orchestrator | + delete_on_termination = false 2026-02-04 00:02:16.947705 | orchestrator | + destination_type = "volume" 2026-02-04 00:02:16.947709 | orchestrator | + multiattach = false 2026-02-04 
00:02:16.947713 | orchestrator | + source_type = "volume" 2026-02-04 00:02:16.947717 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.947721 | orchestrator | } 2026-02-04 00:02:16.947725 | orchestrator | 2026-02-04 00:02:16.947730 | orchestrator | + network { 2026-02-04 00:02:16.947734 | orchestrator | + access_network = false 2026-02-04 00:02:16.947738 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-04 00:02:16.947742 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-04 00:02:16.947746 | orchestrator | + mac = (known after apply) 2026-02-04 00:02:16.947750 | orchestrator | + name = (known after apply) 2026-02-04 00:02:16.947754 | orchestrator | + port = (known after apply) 2026-02-04 00:02:16.947758 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.947763 | orchestrator | } 2026-02-04 00:02:16.947766 | orchestrator | } 2026-02-04 00:02:16.947771 | orchestrator | 2026-02-04 00:02:16.947775 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-04 00:02:16.947779 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-04 00:02:16.947783 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-04 00:02:16.947787 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-04 00:02:16.947791 | orchestrator | + all_metadata = (known after apply) 2026-02-04 00:02:16.947795 | orchestrator | + all_tags = (known after apply) 2026-02-04 00:02:16.947799 | orchestrator | + availability_zone = "nova" 2026-02-04 00:02:16.947802 | orchestrator | + config_drive = true 2026-02-04 00:02:16.947807 | orchestrator | + created = (known after apply) 2026-02-04 00:02:16.947811 | orchestrator | + flavor_id = (known after apply) 2026-02-04 00:02:16.947815 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-04 00:02:16.947819 | orchestrator | + force_delete = false 2026-02-04 00:02:16.947823 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-04 
00:02:16.947827 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.947831 | orchestrator | + image_id = (known after apply) 2026-02-04 00:02:16.947838 | orchestrator | + image_name = (known after apply) 2026-02-04 00:02:16.947842 | orchestrator | + key_pair = "testbed" 2026-02-04 00:02:16.947846 | orchestrator | + name = "testbed-node-2" 2026-02-04 00:02:16.947850 | orchestrator | + power_state = "active" 2026-02-04 00:02:16.947854 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.947858 | orchestrator | + security_groups = (known after apply) 2026-02-04 00:02:16.947862 | orchestrator | + stop_before_destroy = false 2026-02-04 00:02:16.947866 | orchestrator | + updated = (known after apply) 2026-02-04 00:02:16.947871 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-04 00:02:16.947875 | orchestrator | 2026-02-04 00:02:16.947879 | orchestrator | + block_device { 2026-02-04 00:02:16.947883 | orchestrator | + boot_index = 0 2026-02-04 00:02:16.947887 | orchestrator | + delete_on_termination = false 2026-02-04 00:02:16.947891 | orchestrator | + destination_type = "volume" 2026-02-04 00:02:16.947895 | orchestrator | + multiattach = false 2026-02-04 00:02:16.947899 | orchestrator | + source_type = "volume" 2026-02-04 00:02:16.947903 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.947907 | orchestrator | } 2026-02-04 00:02:16.947911 | orchestrator | 2026-02-04 00:02:16.947915 | orchestrator | + network { 2026-02-04 00:02:16.947919 | orchestrator | + access_network = false 2026-02-04 00:02:16.947923 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-04 00:02:16.947927 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-04 00:02:16.947931 | orchestrator | + mac = (known after apply) 2026-02-04 00:02:16.947938 | orchestrator | + name = (known after apply) 2026-02-04 00:02:16.947942 | orchestrator | + port = (known after apply) 2026-02-04 00:02:16.947946 | orchestrator | + uuid 
= (known after apply) 2026-02-04 00:02:16.947950 | orchestrator | } 2026-02-04 00:02:16.947954 | orchestrator | } 2026-02-04 00:02:16.947958 | orchestrator | 2026-02-04 00:02:16.947965 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-04 00:02:16.947969 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-04 00:02:16.947973 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-04 00:02:16.947977 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-04 00:02:16.947981 | orchestrator | + all_metadata = (known after apply) 2026-02-04 00:02:16.947985 | orchestrator | + all_tags = (known after apply) 2026-02-04 00:02:16.947989 | orchestrator | + availability_zone = "nova" 2026-02-04 00:02:16.947992 | orchestrator | + config_drive = true 2026-02-04 00:02:16.947996 | orchestrator | + created = (known after apply) 2026-02-04 00:02:16.948000 | orchestrator | + flavor_id = (known after apply) 2026-02-04 00:02:16.948004 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-04 00:02:16.948008 | orchestrator | + force_delete = false 2026-02-04 00:02:16.948012 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-04 00:02:16.948016 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.948020 | orchestrator | + image_id = (known after apply) 2026-02-04 00:02:16.948024 | orchestrator | + image_name = (known after apply) 2026-02-04 00:02:16.948028 | orchestrator | + key_pair = "testbed" 2026-02-04 00:02:16.948032 | orchestrator | + name = "testbed-node-3" 2026-02-04 00:02:16.948036 | orchestrator | + power_state = "active" 2026-02-04 00:02:16.948040 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.948044 | orchestrator | + security_groups = (known after apply) 2026-02-04 00:02:16.948048 | orchestrator | + stop_before_destroy = false 2026-02-04 00:02:16.948052 | orchestrator | + updated = (known after apply) 2026-02-04 00:02:16.948056 | orchestrator | + 
user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-04 00:02:16.948061 | orchestrator | 2026-02-04 00:02:16.948065 | orchestrator | + block_device { 2026-02-04 00:02:16.948068 | orchestrator | + boot_index = 0 2026-02-04 00:02:16.948072 | orchestrator | + delete_on_termination = false 2026-02-04 00:02:16.948077 | orchestrator | + destination_type = "volume" 2026-02-04 00:02:16.948084 | orchestrator | + multiattach = false 2026-02-04 00:02:16.948088 | orchestrator | + source_type = "volume" 2026-02-04 00:02:16.948092 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.948096 | orchestrator | } 2026-02-04 00:02:16.948100 | orchestrator | 2026-02-04 00:02:16.948104 | orchestrator | + network { 2026-02-04 00:02:16.948109 | orchestrator | + access_network = false 2026-02-04 00:02:16.948113 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-04 00:02:16.948117 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-04 00:02:16.948121 | orchestrator | + mac = (known after apply) 2026-02-04 00:02:16.948125 | orchestrator | + name = (known after apply) 2026-02-04 00:02:16.948129 | orchestrator | + port = (known after apply) 2026-02-04 00:02:16.948133 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.948137 | orchestrator | } 2026-02-04 00:02:16.948141 | orchestrator | } 2026-02-04 00:02:16.948145 | orchestrator | 2026-02-04 00:02:16.948149 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-04 00:02:16.948153 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-04 00:02:16.948158 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-04 00:02:16.948162 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-04 00:02:16.948176 | orchestrator | + all_metadata = (known after apply) 2026-02-04 00:02:16.948180 | orchestrator | + all_tags = (known after apply) 2026-02-04 00:02:16.948184 | orchestrator | + availability_zone = "nova" 2026-02-04 
00:02:16.948188 | orchestrator | + config_drive = true 2026-02-04 00:02:16.948192 | orchestrator | + created = (known after apply) 2026-02-04 00:02:16.948196 | orchestrator | + flavor_id = (known after apply) 2026-02-04 00:02:16.948200 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-04 00:02:16.948204 | orchestrator | + force_delete = false 2026-02-04 00:02:16.948208 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-04 00:02:16.948212 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.948216 | orchestrator | + image_id = (known after apply) 2026-02-04 00:02:16.948220 | orchestrator | + image_name = (known after apply) 2026-02-04 00:02:16.948224 | orchestrator | + key_pair = "testbed" 2026-02-04 00:02:16.948228 | orchestrator | + name = "testbed-node-4" 2026-02-04 00:02:16.948232 | orchestrator | + power_state = "active" 2026-02-04 00:02:16.948236 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.948240 | orchestrator | + security_groups = (known after apply) 2026-02-04 00:02:16.948244 | orchestrator | + stop_before_destroy = false 2026-02-04 00:02:16.948248 | orchestrator | + updated = (known after apply) 2026-02-04 00:02:16.948252 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-04 00:02:16.948256 | orchestrator | 2026-02-04 00:02:16.948260 | orchestrator | + block_device { 2026-02-04 00:02:16.948265 | orchestrator | + boot_index = 0 2026-02-04 00:02:16.948269 | orchestrator | + delete_on_termination = false 2026-02-04 00:02:16.948272 | orchestrator | + destination_type = "volume" 2026-02-04 00:02:16.948276 | orchestrator | + multiattach = false 2026-02-04 00:02:16.948281 | orchestrator | + source_type = "volume" 2026-02-04 00:02:16.948285 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.948288 | orchestrator | } 2026-02-04 00:02:16.948292 | orchestrator | 2026-02-04 00:02:16.948297 | orchestrator | + network { 2026-02-04 00:02:16.948301 | orchestrator | + 
access_network = false 2026-02-04 00:02:16.948305 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-04 00:02:16.948309 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-04 00:02:16.948313 | orchestrator | + mac = (known after apply) 2026-02-04 00:02:16.948317 | orchestrator | + name = (known after apply) 2026-02-04 00:02:16.948321 | orchestrator | + port = (known after apply) 2026-02-04 00:02:16.948325 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.948329 | orchestrator | } 2026-02-04 00:02:16.948333 | orchestrator | } 2026-02-04 00:02:16.948340 | orchestrator | 2026-02-04 00:02:16.948344 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-04 00:02:16.948348 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-04 00:02:16.948352 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-04 00:02:16.948356 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-04 00:02:16.948360 | orchestrator | + all_metadata = (known after apply) 2026-02-04 00:02:16.948367 | orchestrator | + all_tags = (known after apply) 2026-02-04 00:02:16.948371 | orchestrator | + availability_zone = "nova" 2026-02-04 00:02:16.948375 | orchestrator | + config_drive = true 2026-02-04 00:02:16.948379 | orchestrator | + created = (known after apply) 2026-02-04 00:02:16.948383 | orchestrator | + flavor_id = (known after apply) 2026-02-04 00:02:16.948387 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-04 00:02:16.948391 | orchestrator | + force_delete = false 2026-02-04 00:02:16.948395 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-04 00:02:16.948399 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.948403 | orchestrator | + image_id = (known after apply) 2026-02-04 00:02:16.948407 | orchestrator | + image_name = (known after apply) 2026-02-04 00:02:16.948411 | orchestrator | + key_pair = "testbed" 2026-02-04 00:02:16.948415 | orchestrator | 
+ name = "testbed-node-5" 2026-02-04 00:02:16.948420 | orchestrator | + power_state = "active" 2026-02-04 00:02:16.948424 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.948428 | orchestrator | + security_groups = (known after apply) 2026-02-04 00:02:16.948432 | orchestrator | + stop_before_destroy = false 2026-02-04 00:02:16.948436 | orchestrator | + updated = (known after apply) 2026-02-04 00:02:16.948440 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-04 00:02:16.948444 | orchestrator | 2026-02-04 00:02:16.948447 | orchestrator | + block_device { 2026-02-04 00:02:16.948452 | orchestrator | + boot_index = 0 2026-02-04 00:02:16.948456 | orchestrator | + delete_on_termination = false 2026-02-04 00:02:16.948460 | orchestrator | + destination_type = "volume" 2026-02-04 00:02:16.948463 | orchestrator | + multiattach = false 2026-02-04 00:02:16.948467 | orchestrator | + source_type = "volume" 2026-02-04 00:02:16.948472 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.948476 | orchestrator | } 2026-02-04 00:02:16.948480 | orchestrator | 2026-02-04 00:02:16.948484 | orchestrator | + network { 2026-02-04 00:02:16.948488 | orchestrator | + access_network = false 2026-02-04 00:02:16.948492 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-04 00:02:16.948496 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-04 00:02:16.948500 | orchestrator | + mac = (known after apply) 2026-02-04 00:02:16.948504 | orchestrator | + name = (known after apply) 2026-02-04 00:02:16.948508 | orchestrator | + port = (known after apply) 2026-02-04 00:02:16.948512 | orchestrator | + uuid = (known after apply) 2026-02-04 00:02:16.948516 | orchestrator | } 2026-02-04 00:02:16.948520 | orchestrator | } 2026-02-04 00:02:16.948524 | orchestrator | 2026-02-04 00:02:16.948528 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-04 00:02:16.948532 | orchestrator | + resource 
"openstack_compute_keypair_v2" "key" { 2026-02-04 00:02:16.948536 | orchestrator | + fingerprint = (known after apply) 2026-02-04 00:02:16.948541 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.948544 | orchestrator | + name = "testbed" 2026-02-04 00:02:16.948548 | orchestrator | + private_key = (sensitive value) 2026-02-04 00:02:16.948553 | orchestrator | + public_key = (known after apply) 2026-02-04 00:02:16.948557 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.948560 | orchestrator | + user_id = (known after apply) 2026-02-04 00:02:16.948565 | orchestrator | } 2026-02-04 00:02:16.948569 | orchestrator | 2026-02-04 00:02:16.948573 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-04 00:02:16.948577 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-04 00:02:16.948587 | orchestrator | + device = (known after apply) 2026-02-04 00:02:16.948591 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.948595 | orchestrator | + instance_id = (known after apply) 2026-02-04 00:02:16.948599 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.948606 | orchestrator | + volume_id = (known after apply) 2026-02-04 00:02:16.948611 | orchestrator | } 2026-02-04 00:02:16.948615 | orchestrator | 2026-02-04 00:02:16.948619 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-04 00:02:16.948623 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-04 00:02:16.948627 | orchestrator | + device = (known after apply) 2026-02-04 00:02:16.948631 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.948635 | orchestrator | + instance_id = (known after apply) 2026-02-04 00:02:16.948639 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.948644 | orchestrator | + volume_id = (known after apply) 2026-02-04 
00:02:16.948647 | orchestrator | } 2026-02-04 00:02:16.948652 | orchestrator | 2026-02-04 00:02:16.948656 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-04 00:02:16.948660 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-04 00:02:16.948664 | orchestrator | + device = (known after apply) 2026-02-04 00:02:16.948668 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.948672 | orchestrator | + instance_id = (known after apply) 2026-02-04 00:02:16.948677 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.948680 | orchestrator | + volume_id = (known after apply) 2026-02-04 00:02:16.948685 | orchestrator | } 2026-02-04 00:02:16.948689 | orchestrator | 2026-02-04 00:02:16.948693 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2026-02-04 00:02:16.948698 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-04 00:02:16.948702 | orchestrator | + device = (known after apply) 2026-02-04 00:02:16.948706 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.948710 | orchestrator | + instance_id = (known after apply) 2026-02-04 00:02:16.948714 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.948718 | orchestrator | + volume_id = (known after apply) 2026-02-04 00:02:16.948722 | orchestrator | } 2026-02-04 00:02:16.948726 | orchestrator | 2026-02-04 00:02:16.948731 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2026-02-04 00:02:16.948735 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-04 00:02:16.948739 | orchestrator | + device = (known after apply) 2026-02-04 00:02:16.948743 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.948747 | orchestrator | + instance_id = (known after apply) 2026-02-04 00:02:16.948751 | 
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  #   (same attribute set as [5]; all values known after apply)

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      # (same computed attributes as manager_port_management above)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] through [5] will be created
  #   (identical to [0] except fixed_ip.ip_address = 192.168.16.11 … 192.168.16.15)

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2 resources to be created
  # (in each rule, id, region, remote_address_group_id, remote_group_id,
  #  security_group_id and tenant_id are known after apply):
  #
  #   name                             description  direction ethertype protocol ports        remote_ip_prefix
  #   security_group_management_rule1  "ssh"        ingress   IPv4      tcp      22-22        0.0.0.0/0
  #   security_group_management_rule2  "wireguard"  ingress   IPv4      udp      51820-51820  0.0.0.0/0
  #   security_group_management_rule3               ingress   IPv4      tcp                   192.168.16.0/20
  #   security_group_management_rule4               ingress   IPv4      udp                   192.168.16.0/20
  #   security_group_management_rule5               ingress   IPv4      icmp                  0.0.0.0/0
  #   security_group_node_rule1                     ingress   IPv4      tcp                   0.0.0.0/0
  #   security_group_node_rule2                     ingress   IPv4      udp                   0.0.0.0/0
  #   security_group_node_rule3                     ingress   IPv4      icmp                  0.0.0.0/0
  #   security_group_rule_vrrp         "vrrp"       ingress   IPv4      112                   0.0.0.0/0

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags        = (known after apply)
      + cidr            = "192.168.16.0/20"
      + dns_nameservers = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp     = true
      + gateway_ip      = (known after apply)
      + id              = (known after apply)
2026-02-04 00:02:16.950926 | orchestrator | + ip_version = 4 2026-02-04 00:02:16.950930 | orchestrator | + ipv6_address_mode = (known after apply) 2026-02-04 00:02:16.950934 | orchestrator | + ipv6_ra_mode = (known after apply) 2026-02-04 00:02:16.950938 | orchestrator | + name = "subnet-testbed-management" 2026-02-04 00:02:16.950942 | orchestrator | + network_id = (known after apply) 2026-02-04 00:02:16.950945 | orchestrator | + no_gateway = false 2026-02-04 00:02:16.950949 | orchestrator | + region = (known after apply) 2026-02-04 00:02:16.950953 | orchestrator | + service_types = (known after apply) 2026-02-04 00:02:16.950960 | orchestrator | + tenant_id = (known after apply) 2026-02-04 00:02:16.950964 | orchestrator | 2026-02-04 00:02:16.950968 | orchestrator | + allocation_pool { 2026-02-04 00:02:16.950971 | orchestrator | + end = "192.168.31.250" 2026-02-04 00:02:16.950975 | orchestrator | + start = "192.168.31.200" 2026-02-04 00:02:16.950979 | orchestrator | } 2026-02-04 00:02:16.950983 | orchestrator | } 2026-02-04 00:02:16.950987 | orchestrator | 2026-02-04 00:02:16.950990 | orchestrator | # terraform_data.image will be created 2026-02-04 00:02:16.950994 | orchestrator | + resource "terraform_data" "image" { 2026-02-04 00:02:16.950998 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.951002 | orchestrator | + input = "Ubuntu 24.04" 2026-02-04 00:02:16.951006 | orchestrator | + output = (known after apply) 2026-02-04 00:02:16.951010 | orchestrator | } 2026-02-04 00:02:16.951014 | orchestrator | 2026-02-04 00:02:16.951017 | orchestrator | # terraform_data.image_node will be created 2026-02-04 00:02:16.951021 | orchestrator | + resource "terraform_data" "image_node" { 2026-02-04 00:02:16.951025 | orchestrator | + id = (known after apply) 2026-02-04 00:02:16.951029 | orchestrator | + input = "Ubuntu 24.04" 2026-02-04 00:02:16.951032 | orchestrator | + output = (known after apply) 2026-02-04 00:02:16.951036 | orchestrator | } 2026-02-04 
00:02:16.951040 | orchestrator | 2026-02-04 00:02:16.951044 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 2026-02-04 00:02:16.951048 | orchestrator | 2026-02-04 00:02:16.951051 | orchestrator | Changes to Outputs: 2026-02-04 00:02:16.951055 | orchestrator | + manager_address = (sensitive value) 2026-02-04 00:02:16.951059 | orchestrator | + private_key = (sensitive value) 2026-02-04 00:02:17.838099 | orchestrator | terraform_data.image: Creating... 2026-02-04 00:02:17.838149 | orchestrator | terraform_data.image: Creation complete after 0s [id=fd19063c-cb01-f7cf-685a-3834a00d0e0c] 2026-02-04 00:02:17.839916 | orchestrator | terraform_data.image_node: Creating... 2026-02-04 00:02:17.839952 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=e05dbfd6-a73b-8aa2-c5fc-ad2a25f2ff9c] 2026-02-04 00:02:17.854356 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-02-04 00:02:17.858993 | orchestrator | openstack_networking_network_v2.net_management: Creating... 2026-02-04 00:02:17.860067 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-02-04 00:02:17.861068 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-02-04 00:02:17.864995 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-02-04 00:02:17.871767 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-02-04 00:02:17.871807 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-02-04 00:02:17.871813 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-02-04 00:02:17.871818 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-02-04 00:02:17.874988 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 
2026-02-04 00:02:18.342795 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-04 00:02:18.350304 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-02-04 00:02:18.412695 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-02-04 00:02:18.418101 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-02-04 00:02:18.970072 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=2803e18b-1938-4074-a77b-fc9f435e2fb2]
2026-02-04 00:02:18.971458 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-02-04 00:02:19.032690 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-04 00:02:19.040654 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-02-04 00:02:21.602578 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=e6547550-6f0e-4316-b715-af657c75c64a]
2026-02-04 00:02:21.618575 | orchestrator | local_file.id_rsa_pub: Creating...
2026-02-04 00:02:21.624128 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=4f01749b863e0072f61c5603b8a18faa16034213]
2026-02-04 00:02:21.634127 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-02-04 00:02:21.640403 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=1353e95c31a8260115ad5e809245033c79b7a9e5]
2026-02-04 00:02:21.644190 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-02-04 00:02:21.657782 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=70272979-0540-4b40-8ef0-41f73c6a4a5a]
2026-02-04 00:02:21.661468 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=f279b9c8-b4a1-41c6-b00f-bd5a2c0b4c81]
2026-02-04 00:02:21.668498 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-02-04 00:02:21.668939 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=330cb526-2149-4826-b513-02c8e88ca89e]
2026-02-04 00:02:21.670211 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-02-04 00:02:21.692151 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=6b2cce40-d718-4f99-a243-3b703c717e59]
2026-02-04 00:02:21.693013 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-02-04 00:02:21.702134 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-02-04 00:02:21.704094 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=5b592fbb-955b-4fdf-b12f-717d86698fde]
2026-02-04 00:02:21.712586 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=1679d905-c182-4dcb-a16f-ff388fb87fa8]
2026-02-04 00:02:21.713233 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-02-04 00:02:21.716765 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=6b00b999-8e8e-4579-a93c-a7b8030012f4]
2026-02-04 00:02:21.717239 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-02-04 00:02:21.737677 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=b014772c-38b5-4caa-9603-223bc8ef3a74]
2026-02-04 00:02:22.436549 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=130a642a-2400-431a-8a72-8f1b2886ed0c]
2026-02-04 00:02:22.625104 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=83bc4005-7777-4ddf-b793-20b9237d3426]
2026-02-04 00:02:22.634570 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-02-04 00:02:25.182762 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=ac95cfef-965b-47de-974f-7b957b3140f3]
2026-02-04 00:02:25.204442 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=80f63be7-f780-4338-9bbf-82469273ebcd]
2026-02-04 00:02:25.239420 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016]
2026-02-04 00:02:25.251331 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=6f34b523-bd6d-4929-b1b4-04af8dcf542b]
2026-02-04 00:02:25.265532 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=2836c5f1-c587-4b9a-8d47-e8c2679ad004]
2026-02-04 00:02:25.293138 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=a77de84b-d98a-4a0a-b405-24b611969fa7]
2026-02-04 00:02:25.836659 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=e6cdc557-1dfd-452d-9f98-34b7146a7d6b]
2026-02-04 00:02:27.272429 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-02-04 00:02:27.272463 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-02-04 00:02:27.272475 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-02-04 00:02:27.272487 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=ea072b15-6280-4b0e-8fbd-d116e6550e73]
2026-02-04 00:02:27.272503 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-02-04 00:02:27.272525 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-02-04 00:02:27.272544 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-02-04 00:02:27.272582 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-02-04 00:02:27.272603 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-02-04 00:02:27.272623 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-02-04 00:02:27.272644 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=2e333206-754d-4817-9a65-56c97f04f3e8]
2026-02-04 00:02:27.272665 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-02-04 00:02:27.272684 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-02-04 00:02:27.272725 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-02-04 00:02:27.272746 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=97a17447-bb91-4c0d-827b-5d6d5fc54f37]
2026-02-04 00:02:27.272768 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-02-04 00:02:27.272788 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=dfb9c08d-6fab-4b48-9d1e-5ebcfaaf3a6b]
2026-02-04 00:02:27.272807 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-02-04 00:02:27.272828 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=56c38bf3-c2db-4162-87cb-8fc5a5edc959]
2026-02-04 00:02:27.272846 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-02-04 00:02:27.272866 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=b315e7da-165d-4b59-86ef-98166a43e9d7]
2026-02-04 00:02:27.272887 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-02-04 00:02:27.272907 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=23ec379e-c41e-4ae1-bc80-41823c891ff9]
2026-02-04 00:02:27.272922 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-02-04 00:02:27.272934 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=890ae4c6-4787-45bc-aff1-fa8360325b77]
2026-02-04 00:02:27.272945 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-02-04 00:02:27.686997 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=e86de680-6216-4ee0-aeaa-75bd7b80689a]
2026-02-04 00:02:27.694408 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-02-04 00:02:27.781945 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=fa19e430-2f19-426f-ab44-644707bc0348]
2026-02-04 00:02:27.861693 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=5b25f043-4b92-40ac-bfc9-36c50ec61aa0]
2026-02-04 00:02:27.891050 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 2s [id=518c514a-6b4e-4acf-97a2-c6738208c048]
2026-02-04 00:02:27.932065 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=d4e00738-ea81-4fd0-b862-399ec5929814]
2026-02-04 00:02:28.004105 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=8aeb47ba-58d7-4d9c-8b0d-861e330caeec]
2026-02-04 00:02:28.011538 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 2s [id=b94e2312-cea4-4e7e-b082-682cb85cf262]
2026-02-04 00:02:28.055386 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=b30e108c-827e-40a9-b63a-406b29f09ed9]
2026-02-04 00:02:28.284101 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 2s [id=e2656e7e-2ffe-451d-85c5-b64f1b8fba2a]
2026-02-04 00:02:29.350087 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 2s [id=ccdff333-34bb-4d73-8ab7-22d21a80311a]
2026-02-04 00:02:32.252754 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 6s [id=1027dfd7-be0c-4982-b241-3cc5af060c63]
2026-02-04 00:02:32.472286 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-02-04 00:02:32.472362 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-02-04 00:02:32.472378 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-02-04 00:02:32.472390 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-02-04 00:02:32.472402 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-02-04 00:02:32.472413 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-02-04 00:02:32.472424 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-02-04 00:02:34.015346 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=4fb5811e-1e59-4655-ad9d-71150bd06cb3]
2026-02-04 00:02:34.026984 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-02-04 00:02:34.030106 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-02-04 00:02:34.032001 | orchestrator | local_file.inventory: Creating...
2026-02-04 00:02:34.034943 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=d4e7a1fb082371060750bbe98a2c4981c31db8ab]
2026-02-04 00:02:34.040027 | orchestrator | local_file.inventory: Creation complete after 0s [id=6eba1514357672ec0dfcc4cbfd9f48a954565a5c]
2026-02-04 00:02:34.809942 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=4fb5811e-1e59-4655-ad9d-71150bd06cb3]
2026-02-04 00:02:42.290174 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-02-04 00:02:42.290256 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-02-04 00:02:42.291511 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-02-04 00:02:42.291728 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-02-04 00:02:42.307890 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-02-04 00:02:42.311263 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-02-04 00:02:52.299940 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-02-04 00:02:52.300149 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-02-04 00:02:52.300168 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-02-04 00:02:52.300182 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-02-04 00:02:52.308257 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-02-04 00:02:52.311409 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-02-04 00:03:02.309498 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-02-04 00:03:02.309597 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-02-04 00:03:02.309608 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-02-04 00:03:02.309616 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-02-04 00:03:02.309634 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-02-04 00:03:02.311728 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-02-04 00:03:02.952365 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=5d53fc23-d8d5-4505-9f7a-07b39b906fd1]
2026-02-04 00:03:12.309757 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-02-04 00:03:12.309865 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-02-04 00:03:12.309876 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-02-04 00:03:12.309893 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-02-04 00:03:12.312136 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-02-04 00:03:22.310369 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [50s elapsed]
2026-02-04 00:03:22.310467 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed]
2026-02-04 00:03:22.310479 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-02-04 00:03:22.310486 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed]
2026-02-04 00:03:22.312625 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-02-04 00:03:23.236462 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 51s [id=04335704-ba97-4034-b340-dbfb342c7d09]
2026-02-04 00:03:23.434390 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 51s [id=8c1aa142-5ee3-426d-8528-1c9148b4eedb]
2026-02-04 00:03:32.319166 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [1m0s elapsed]
2026-02-04 00:03:32.319227 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [1m0s elapsed]
2026-02-04 00:03:32.319234 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [1m0s elapsed]
2026-02-04 00:03:33.068132 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 1m1s [id=87eecd08-c2ee-4b1f-9c1e-9e3e6e6632cf]
2026-02-04 00:03:33.469429 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 1m1s [id=b7110f24-e56f-4d9e-8572-edc9c926cde7]
2026-02-04 00:03:33.504274 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 1m2s [id=9c1332f0-f0c7-4c68-8fd1-b4771ac24daa]
2026-02-04 00:03:33.524721 | orchestrator | null_resource.node_semaphore: Creating...
2026-02-04 00:03:33.532415 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=7104891386494949341]
2026-02-04 00:03:33.536735 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-02-04 00:03:33.536791 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-02-04 00:03:33.536825 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-02-04 00:03:33.536891 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-02-04 00:03:33.536942 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-02-04 00:03:33.537114 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-02-04 00:03:33.538194 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-02-04 00:03:33.540668 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-02-04 00:03:33.540749 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-02-04 00:03:33.565921 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-02-04 00:03:36.912750 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=b7110f24-e56f-4d9e-8572-edc9c926cde7/6b2cce40-d718-4f99-a243-3b703c717e59]
2026-02-04 00:03:36.937962 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=87eecd08-c2ee-4b1f-9c1e-9e3e6e6632cf/5b592fbb-955b-4fdf-b12f-717d86698fde]
2026-02-04 00:03:36.952007 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=9c1332f0-f0c7-4c68-8fd1-b4771ac24daa/f279b9c8-b4a1-41c6-b00f-bd5a2c0b4c81]
2026-02-04 00:03:36.964260 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=87eecd08-c2ee-4b1f-9c1e-9e3e6e6632cf/70272979-0540-4b40-8ef0-41f73c6a4a5a]
2026-02-04 00:03:36.971835 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=b7110f24-e56f-4d9e-8572-edc9c926cde7/e6547550-6f0e-4316-b715-af657c75c64a]
2026-02-04 00:03:36.993956 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=9c1332f0-f0c7-4c68-8fd1-b4771ac24daa/6b00b999-8e8e-4579-a93c-a7b8030012f4]
2026-02-04 00:03:43.085367 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=87eecd08-c2ee-4b1f-9c1e-9e3e6e6632cf/b014772c-38b5-4caa-9603-223bc8ef3a74]
2026-02-04 00:03:43.107952 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 9s [id=b7110f24-e56f-4d9e-8572-edc9c926cde7/330cb526-2149-4826-b513-02c8e88ca89e]
2026-02-04 00:03:43.120185 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 9s [id=9c1332f0-f0c7-4c68-8fd1-b4771ac24daa/1679d905-c182-4dcb-a16f-ff388fb87fa8]
2026-02-04 00:03:43.566947 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-02-04 00:03:53.568234 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-02-04 00:03:54.056091 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=ceab311f-a622-4403-b447-3e17be1db210]
2026-02-04 00:03:54.070321 | orchestrator |
2026-02-04 00:03:54.070372 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-02-04 00:03:54.070379 | orchestrator |
2026-02-04 00:03:54.070385 | orchestrator | Outputs:
2026-02-04 00:03:54.070397 | orchestrator |
2026-02-04 00:03:54.070412 | orchestrator | manager_address =
2026-02-04 00:03:54.070417 | orchestrator | private_key =
2026-02-04 00:03:54.532081 | orchestrator | ok: Runtime: 0:01:42.876031
2026-02-04 00:03:54.564444 |
2026-02-04 00:03:54.564573 | TASK [Fetch manager address]
2026-02-04 00:03:55.022748 | orchestrator | ok
2026-02-04 00:03:55.033162 |
2026-02-04 00:03:55.033367 | TASK [Set manager_host address]
2026-02-04 00:03:55.114125 | orchestrator | ok
2026-02-04 00:03:55.125965 |
2026-02-04 00:03:55.126116 | LOOP [Update ansible collections]
2026-02-04 00:03:56.016134 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-04 00:03:56.016520 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-04 00:03:56.016581 | orchestrator | Starting galaxy collection install process
2026-02-04 00:03:56.016622 | orchestrator | Process install dependency map
2026-02-04 00:03:56.016658 | orchestrator | Starting collection install process
2026-02-04 00:03:56.016691 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons'
2026-02-04 00:03:56.016731 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons
2026-02-04 00:03:56.016801 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-02-04 00:03:56.016886 | orchestrator | ok: Item: commons Runtime: 0:00:00.566231
2026-02-04 00:03:58.241288 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-04 00:03:58.241461 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-04 00:03:58.241515 | orchestrator | Starting galaxy collection install process
2026-02-04 00:03:58.241555 | orchestrator | Process install dependency map
2026-02-04 00:03:58.241591 | orchestrator | Starting collection install process
2026-02-04 00:03:58.241625 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services'
2026-02-04 00:03:58.241659 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services
2026-02-04 00:03:58.241691 | orchestrator | osism.services:999.0.0 was installed successfully
2026-02-04 00:03:58.241744 | orchestrator | ok: Item: services Runtime: 0:00:01.974404
2026-02-04 00:03:58.266911 |
2026-02-04 00:03:58.267088 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-02-04 00:04:08.840793 | orchestrator | ok
2026-02-04 00:04:08.850720 |
2026-02-04 00:04:08.850851 | TASK [Wait a little longer for the manager so that everything is ready]
2026-02-04 00:05:08.896543 | orchestrator | ok
2026-02-04 00:05:08.908263 |
2026-02-04 00:05:08.908434 | TASK [Fetch manager ssh hostkey]
2026-02-04 00:05:10.487975 | orchestrator | Output suppressed because no_log was given
2026-02-04 00:05:10.495304 |
2026-02-04 00:05:10.495438 | TASK [Get ssh keypair from terraform environment]
2026-02-04 00:05:11.031872 | orchestrator | ok: Runtime: 0:00:00.005735
2026-02-04 00:05:11.046693 |
2026-02-04 00:05:11.046901 | TASK [Point out that the following task takes some time and does not give any output]
2026-02-04 00:05:11.083409 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-02-04 00:05:11.093955 |
2026-02-04 00:05:11.094103 | TASK [Run manager part 0]
2026-02-04 00:05:12.156899 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-04 00:05:12.212108 | orchestrator |
2026-02-04 00:05:12.212157 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-02-04 00:05:12.212164 | orchestrator |
2026-02-04 00:05:12.212179 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-02-04 00:05:13.922144 | orchestrator | ok: [testbed-manager]
2026-02-04 00:05:13.922197 | orchestrator |
2026-02-04 00:05:13.922220 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-02-04 00:05:13.922229 | orchestrator |
2026-02-04 00:05:13.922238 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-04 00:05:15.781748 | orchestrator | ok: [testbed-manager]
2026-02-04 00:05:15.781794 | orchestrator |
2026-02-04 00:05:15.781801 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-02-04 00:05:16.453366 | orchestrator | ok: [testbed-manager]
2026-02-04 00:05:16.453422 | orchestrator |
2026-02-04 00:05:16.453435 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-02-04 00:05:16.502565 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:05:16.502606 | orchestrator |
2026-02-04 00:05:16.502615 | orchestrator | TASK [Update package cache] ****************************************************
2026-02-04 00:05:16.534547 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:05:16.534599 | orchestrator |
2026-02-04 00:05:16.534611 | orchestrator | TASK [Install required packages] ***********************************************
2026-02-04 00:05:16.563503 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:05:16.563555 | orchestrator |
2026-02-04 00:05:16.563565 | orchestrator | TASK [Remove some python packages] *********************************************
2026-02-04 00:05:16.595472 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:05:16.595520 | orchestrator |
2026-02-04 00:05:16.595530 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-02-04 00:05:16.629327 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:05:16.629378 | orchestrator |
2026-02-04 00:05:16.629392 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-02-04 00:05:16.661359 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:05:16.661403 | orchestrator |
2026-02-04 00:05:16.661411 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-02-04 00:05:16.693469 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:05:16.693512 | orchestrator |
2026-02-04 00:05:16.693520 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-02-04 00:05:17.386178 | orchestrator | changed: [testbed-manager]
2026-02-04 00:05:17.386234 | orchestrator |
2026-02-04 00:05:17.386245 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-02-04 00:08:03.264787 | orchestrator | changed: [testbed-manager]
2026-02-04 00:08:03.264907 | orchestrator |
2026-02-04 00:08:03.264929 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-02-04 00:09:35.447635 | orchestrator | changed: [testbed-manager]
2026-02-04 00:09:35.447733 | orchestrator |
2026-02-04 00:09:35.447750 | orchestrator | TASK [Install required
packages] *********************************************** 2026-02-04 00:10:01.566772 | orchestrator | changed: [testbed-manager] 2026-02-04 00:10:01.566874 | orchestrator | 2026-02-04 00:10:01.566896 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-04 00:10:12.182187 | orchestrator | changed: [testbed-manager] 2026-02-04 00:10:12.182249 | orchestrator | 2026-02-04 00:10:12.182259 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-04 00:10:12.237569 | orchestrator | ok: [testbed-manager] 2026-02-04 00:10:12.237654 | orchestrator | 2026-02-04 00:10:12.237665 | orchestrator | TASK [Get current user] ******************************************************** 2026-02-04 00:10:13.072240 | orchestrator | ok: [testbed-manager] 2026-02-04 00:10:13.072353 | orchestrator | 2026-02-04 00:10:13.072408 | orchestrator | TASK [Create venv directory] *************************************************** 2026-02-04 00:10:13.808201 | orchestrator | changed: [testbed-manager] 2026-02-04 00:10:13.808294 | orchestrator | 2026-02-04 00:10:13.808312 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-02-04 00:10:22.138319 | orchestrator | changed: [testbed-manager] 2026-02-04 00:10:22.138591 | orchestrator | 2026-02-04 00:10:22.138644 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-02-04 00:10:27.950630 | orchestrator | changed: [testbed-manager] 2026-02-04 00:10:27.950732 | orchestrator | 2026-02-04 00:10:27.950753 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-02-04 00:10:30.419850 | orchestrator | changed: [testbed-manager] 2026-02-04 00:10:30.419948 | orchestrator | 2026-02-04 00:10:30.419964 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-02-04 00:10:32.153615 | 
orchestrator | changed: [testbed-manager] 2026-02-04 00:10:32.153713 | orchestrator | 2026-02-04 00:10:32.153737 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-02-04 00:10:33.219563 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-04 00:10:33.220098 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-04 00:10:33.220139 | orchestrator | 2026-02-04 00:10:33.220228 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-04 00:10:33.265265 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-04 00:10:33.265383 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-04 00:10:33.265408 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-04 00:10:33.265430 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-04 00:10:41.501585 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-04 00:10:41.501703 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-04 00:10:41.501730 | orchestrator | 2026-02-04 00:10:41.501751 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-04 00:10:42.075976 | orchestrator | changed: [testbed-manager] 2026-02-04 00:10:42.076068 | orchestrator | 2026-02-04 00:10:42.076085 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-04 00:12:05.318340 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-04 00:12:05.318420 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-04 00:12:05.318431 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-04 00:12:05.318442 | orchestrator | 2026-02-04 00:12:05.318454 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-04 00:12:07.599547 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-02-04 00:12:07.599642 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-04 00:12:07.599657 | orchestrator | 2026-02-04 00:12:07.599670 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-04 00:12:07.599682 | orchestrator | 2026-02-04 00:12:07.599694 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-04 00:12:08.955548 | orchestrator | ok: [testbed-manager] 2026-02-04 00:12:08.955639 | orchestrator | 2026-02-04 00:12:08.955661 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-04 00:12:09.009546 | orchestrator | ok: [testbed-manager] 2026-02-04 00:12:09.009618 | 
orchestrator | 2026-02-04 00:12:09.009630 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-04 00:12:09.093213 | orchestrator | ok: [testbed-manager] 2026-02-04 00:12:09.093294 | orchestrator | 2026-02-04 00:12:09.093310 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-04 00:12:09.887260 | orchestrator | changed: [testbed-manager] 2026-02-04 00:12:09.887360 | orchestrator | 2026-02-04 00:12:09.887378 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-04 00:12:10.647906 | orchestrator | changed: [testbed-manager] 2026-02-04 00:12:10.647999 | orchestrator | 2026-02-04 00:12:10.648015 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-04 00:12:12.114234 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-04 00:12:12.114365 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-04 00:12:12.114388 | orchestrator | 2026-02-04 00:12:12.114455 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-04 00:12:13.479341 | orchestrator | changed: [testbed-manager] 2026-02-04 00:12:13.479428 | orchestrator | 2026-02-04 00:12:13.479442 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-04 00:12:15.244401 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-04 00:12:15.244513 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-04 00:12:15.244539 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-04 00:12:15.244558 | orchestrator | 2026-02-04 00:12:15.244579 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-04 00:12:15.308468 | orchestrator | skipping: 
[testbed-manager] 2026-02-04 00:12:15.308562 | orchestrator | 2026-02-04 00:12:15.308580 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-04 00:12:15.395127 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:12:15.395820 | orchestrator | 2026-02-04 00:12:15.395884 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-04 00:12:15.956336 | orchestrator | changed: [testbed-manager] 2026-02-04 00:12:15.956390 | orchestrator | 2026-02-04 00:12:15.956398 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-04 00:12:16.034556 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:12:16.034595 | orchestrator | 2026-02-04 00:12:16.034601 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-04 00:12:16.885044 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-04 00:12:16.885111 | orchestrator | changed: [testbed-manager] 2026-02-04 00:12:16.885120 | orchestrator | 2026-02-04 00:12:16.885126 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-04 00:12:16.925706 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:12:16.925747 | orchestrator | 2026-02-04 00:12:16.925755 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-04 00:12:16.963015 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:12:16.963125 | orchestrator | 2026-02-04 00:12:16.963142 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-04 00:12:16.992222 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:12:16.992302 | orchestrator | 2026-02-04 00:12:16.992320 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-04 00:12:17.060931 | 
orchestrator | skipping: [testbed-manager] 2026-02-04 00:12:17.060970 | orchestrator | 2026-02-04 00:12:17.060976 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-04 00:12:17.773301 | orchestrator | ok: [testbed-manager] 2026-02-04 00:12:17.773347 | orchestrator | 2026-02-04 00:12:17.773356 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-04 00:12:17.773364 | orchestrator | 2026-02-04 00:12:17.773370 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-04 00:12:19.162629 | orchestrator | ok: [testbed-manager] 2026-02-04 00:12:19.162697 | orchestrator | 2026-02-04 00:12:19.162715 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-02-04 00:12:20.112176 | orchestrator | changed: [testbed-manager] 2026-02-04 00:12:20.112242 | orchestrator | 2026-02-04 00:12:20.112258 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:12:20.112361 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-02-04 00:12:20.112402 | orchestrator | 2026-02-04 00:12:20.435750 | orchestrator | ok: Runtime: 0:07:08.803369 2026-02-04 00:12:20.452526 | 2026-02-04 00:12:20.452650 | TASK [Point out that logging in on the manager is now possible] 2026-02-04 00:12:20.490961 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-02-04 00:12:20.500092 | 2026-02-04 00:12:20.500216 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-04 00:12:20.539438 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-02-04 00:12:20.549592 | 2026-02-04 00:12:20.549749 | TASK [Run manager part 1 + 2] 2026-02-04 00:12:21.893318 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-04 00:12:21.953580 | orchestrator | 2026-02-04 00:12:21.953624 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-04 00:12:21.953631 | orchestrator | 2026-02-04 00:12:21.953644 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-04 00:12:24.837295 | orchestrator | ok: [testbed-manager] 2026-02-04 00:12:24.837462 | orchestrator | 2026-02-04 00:12:24.837521 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-04 00:12:24.875534 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:12:24.875608 | orchestrator | 2026-02-04 00:12:24.875624 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-04 00:12:24.913803 | orchestrator | ok: [testbed-manager] 2026-02-04 00:12:24.913904 | orchestrator | 2026-02-04 00:12:24.913926 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-04 00:12:24.958164 | orchestrator | ok: [testbed-manager] 2026-02-04 00:12:24.958249 | orchestrator | 2026-02-04 00:12:24.958266 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-04 00:12:25.033957 | orchestrator | ok: [testbed-manager] 2026-02-04 00:12:25.034036 | orchestrator | 2026-02-04 00:12:25.034045 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-04 00:12:25.101297 | orchestrator | ok: [testbed-manager] 2026-02-04 00:12:25.101358 | orchestrator | 2026-02-04 00:12:25.101368 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-04 00:12:25.150224 | 
orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-04 00:12:25.150311 | orchestrator | 2026-02-04 00:12:25.150325 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-04 00:12:25.849077 | orchestrator | ok: [testbed-manager] 2026-02-04 00:12:25.849198 | orchestrator | 2026-02-04 00:12:25.849228 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-04 00:12:25.910804 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:12:25.910898 | orchestrator | 2026-02-04 00:12:25.910922 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-04 00:12:27.282009 | orchestrator | changed: [testbed-manager] 2026-02-04 00:12:27.282145 | orchestrator | 2026-02-04 00:12:27.282156 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-04 00:12:27.849277 | orchestrator | ok: [testbed-manager] 2026-02-04 00:12:27.849339 | orchestrator | 2026-02-04 00:12:27.849349 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-04 00:12:28.966298 | orchestrator | changed: [testbed-manager] 2026-02-04 00:12:28.966356 | orchestrator | 2026-02-04 00:12:28.966365 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-04 00:12:43.394235 | orchestrator | changed: [testbed-manager] 2026-02-04 00:12:43.394323 | orchestrator | 2026-02-04 00:12:43.394339 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-04 00:12:44.074729 | orchestrator | ok: [testbed-manager] 2026-02-04 00:12:44.074831 | orchestrator | 2026-02-04 00:12:44.074857 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-02-04 00:12:44.146208 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:12:44.146599 | orchestrator | 2026-02-04 00:12:44.146626 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-04 00:12:45.082772 | orchestrator | changed: [testbed-manager] 2026-02-04 00:12:45.082862 | orchestrator | 2026-02-04 00:12:45.083313 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-04 00:12:46.059435 | orchestrator | changed: [testbed-manager] 2026-02-04 00:12:46.059506 | orchestrator | 2026-02-04 00:12:46.059515 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-04 00:12:46.644463 | orchestrator | changed: [testbed-manager] 2026-02-04 00:12:46.644549 | orchestrator | 2026-02-04 00:12:46.644570 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-04 00:12:46.685500 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-04 00:12:46.685587 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-04 00:12:46.685598 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-04 00:12:46.685607 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-04 00:12:49.649344 | orchestrator | changed: [testbed-manager] 2026-02-04 00:12:49.649452 | orchestrator | 2026-02-04 00:12:49.649480 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-04 00:12:58.335702 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-04 00:12:58.335768 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-04 00:12:58.335786 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-04 00:12:58.335799 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-04 00:12:58.335815 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-04 00:12:58.335826 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-04 00:12:58.335837 | orchestrator | 2026-02-04 00:12:58.335850 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-04 00:12:59.353494 | orchestrator | changed: [testbed-manager] 2026-02-04 00:12:59.353606 | orchestrator | 2026-02-04 00:12:59.353634 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-04 00:12:59.399703 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:12:59.399770 | orchestrator | 2026-02-04 00:12:59.399780 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-04 00:13:01.849421 | orchestrator | changed: [testbed-manager] 2026-02-04 00:13:01.849482 | orchestrator | 2026-02-04 00:13:01.849491 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-04 00:13:01.907204 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:13:01.907320 | orchestrator | 2026-02-04 00:13:01.907351 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-04 00:14:35.093197 | orchestrator | changed: [testbed-manager] 2026-02-04 
00:14:35.093264 | orchestrator | 2026-02-04 00:14:35.093275 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-04 00:14:36.176687 | orchestrator | ok: [testbed-manager] 2026-02-04 00:14:36.176763 | orchestrator | 2026-02-04 00:14:36.176776 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:14:36.176788 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-04 00:14:36.176797 | orchestrator | 2026-02-04 00:14:36.673183 | orchestrator | ok: Runtime: 0:02:15.354037 2026-02-04 00:14:36.692335 | 2026-02-04 00:14:36.692468 | TASK [Reboot manager] 2026-02-04 00:14:38.230474 | orchestrator | ok: Runtime: 0:00:00.938414 2026-02-04 00:14:38.247802 | 2026-02-04 00:14:38.247968 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-04 00:14:52.132112 | orchestrator | ok 2026-02-04 00:14:52.141200 | 2026-02-04 00:14:52.141319 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-04 00:15:52.192790 | orchestrator | ok 2026-02-04 00:15:52.202946 | 2026-02-04 00:15:52.203081 | TASK [Deploy manager + bootstrap nodes] 2026-02-04 00:15:54.626977 | orchestrator | 2026-02-04 00:15:54.627154 | orchestrator | # DEPLOY MANAGER 2026-02-04 00:15:54.627180 | orchestrator | 2026-02-04 00:15:54.627196 | orchestrator | + set -e 2026-02-04 00:15:54.627210 | orchestrator | + echo 2026-02-04 00:15:54.627227 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-04 00:15:54.627246 | orchestrator | + echo 2026-02-04 00:15:54.627295 | orchestrator | + cat /opt/manager-vars.sh 2026-02-04 00:15:54.630164 | orchestrator | export NUMBER_OF_NODES=6 2026-02-04 00:15:54.630221 | orchestrator | 2026-02-04 00:15:54.630234 | orchestrator | export CEPH_VERSION=reef 2026-02-04 00:15:54.630248 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-04 00:15:54.630262 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-02-04 00:15:54.630286 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-02-04 00:15:54.630298 | orchestrator | 2026-02-04 00:15:54.630317 | orchestrator | export ARA=false 2026-02-04 00:15:54.630329 | orchestrator | export DEPLOY_MODE=manager 2026-02-04 00:15:54.630346 | orchestrator | export TEMPEST=true 2026-02-04 00:15:54.630358 | orchestrator | export IS_ZUUL=true 2026-02-04 00:15:54.630369 | orchestrator | 2026-02-04 00:15:54.630388 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.40 2026-02-04 00:15:54.630399 | orchestrator | export EXTERNAL_API=false 2026-02-04 00:15:54.630410 | orchestrator | 2026-02-04 00:15:54.630421 | orchestrator | export IMAGE_USER=ubuntu 2026-02-04 00:15:54.630460 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-04 00:15:54.630473 | orchestrator | 2026-02-04 00:15:54.630484 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-04 00:15:54.630504 | orchestrator | 2026-02-04 00:15:54.630515 | orchestrator | + echo 2026-02-04 00:15:54.630528 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-04 00:15:54.631281 | orchestrator | ++ export INTERACTIVE=false 2026-02-04 00:15:54.631303 | orchestrator | ++ INTERACTIVE=false 2026-02-04 00:15:54.631320 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-04 00:15:54.631333 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-04 00:15:54.631503 | orchestrator | + source /opt/manager-vars.sh 2026-02-04 00:15:54.631527 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-04 00:15:54.631539 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-04 00:15:54.631647 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-04 00:15:54.631662 | orchestrator | ++ CEPH_VERSION=reef 2026-02-04 00:15:54.631674 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-04 00:15:54.631685 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-04 00:15:54.631697 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-04 00:15:54.631708 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-04 00:15:54.631723 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-04 00:15:54.631746 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-04 00:15:54.631799 | orchestrator | ++ export ARA=false 2026-02-04 00:15:54.631811 | orchestrator | ++ ARA=false 2026-02-04 00:15:54.631822 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-04 00:15:54.631833 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-04 00:15:54.631844 | orchestrator | ++ export TEMPEST=true 2026-02-04 00:15:54.631855 | orchestrator | ++ TEMPEST=true 2026-02-04 00:15:54.631866 | orchestrator | ++ export IS_ZUUL=true 2026-02-04 00:15:54.631877 | orchestrator | ++ IS_ZUUL=true 2026-02-04 00:15:54.631888 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.40 2026-02-04 00:15:54.631899 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.40 2026-02-04 00:15:54.631910 | orchestrator | ++ export EXTERNAL_API=false 2026-02-04 00:15:54.631921 | orchestrator | ++ EXTERNAL_API=false 2026-02-04 00:15:54.631932 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-04 00:15:54.631947 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-04 00:15:54.631959 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-04 00:15:54.631970 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-04 00:15:54.631981 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-04 00:15:54.631992 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-04 00:15:54.632004 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-04 00:15:54.689319 | orchestrator | + docker version 2026-02-04 00:15:54.930129 | orchestrator | Client: Docker Engine - Community 2026-02-04 00:15:54.930213 | orchestrator | Version: 27.5.1 2026-02-04 00:15:54.930228 | orchestrator | API version: 1.47 2026-02-04 00:15:54.930241 | orchestrator | Go version: go1.22.11 2026-02-04 00:15:54.930251 | orchestrator | Git commit: 9f9e405 2026-02-04 00:15:54.930262 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-04 00:15:54.930272 | orchestrator | OS/Arch: linux/amd64 2026-02-04 00:15:54.930282 | orchestrator | Context: default 2026-02-04 00:15:54.930293 | orchestrator | 2026-02-04 00:15:54.930303 | orchestrator | Server: Docker Engine - Community 2026-02-04 00:15:54.930313 | orchestrator | Engine: 2026-02-04 00:15:54.930323 | orchestrator | Version: 27.5.1 2026-02-04 00:15:54.930334 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-04 00:15:54.930375 | orchestrator | Go version: go1.22.11 2026-02-04 00:15:54.930386 | orchestrator | Git commit: 4c9b3b0 2026-02-04 00:15:54.930396 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-04 00:15:54.930405 | orchestrator | OS/Arch: linux/amd64 2026-02-04 00:15:54.930415 | orchestrator | Experimental: false 2026-02-04 00:15:54.930425 | orchestrator | containerd: 2026-02-04 00:15:54.930434 | orchestrator | Version: v2.2.1 2026-02-04 00:15:54.930445 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-04 00:15:54.930455 | orchestrator | runc: 2026-02-04 00:15:54.930465 | orchestrator | Version: 1.3.4 2026-02-04 00:15:54.930475 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-04 00:15:54.930485 | orchestrator | docker-init: 2026-02-04 00:15:54.930494 | orchestrator | Version: 0.19.0 2026-02-04 00:15:54.930505 | orchestrator | GitCommit: de40ad0 2026-02-04 00:15:54.933440 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-04 00:15:54.943647 | orchestrator | + set -e 2026-02-04 00:15:54.943737 | orchestrator | + source /opt/manager-vars.sh 2026-02-04 00:15:54.943784 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-04 00:15:54.943801 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-04 00:15:54.943813 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-04 00:15:54.943824 | orchestrator | ++ CEPH_VERSION=reef 2026-02-04 00:15:54.943836 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-04 
00:15:54.943848 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-04 00:15:54.943860 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-04 00:15:54.943871 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-04 00:15:54.943882 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-04 00:15:54.943893 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-04 00:15:54.943904 | orchestrator | ++ export ARA=false 2026-02-04 00:15:54.943916 | orchestrator | ++ ARA=false 2026-02-04 00:15:54.943927 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-04 00:15:54.943939 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-04 00:15:54.943950 | orchestrator | ++ export TEMPEST=true 2026-02-04 00:15:54.943961 | orchestrator | ++ TEMPEST=true 2026-02-04 00:15:54.943984 | orchestrator | ++ export IS_ZUUL=true 2026-02-04 00:15:54.943996 | orchestrator | ++ IS_ZUUL=true 2026-02-04 00:15:54.944007 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.40 2026-02-04 00:15:54.944018 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.40 2026-02-04 00:15:54.944029 | orchestrator | ++ export EXTERNAL_API=false 2026-02-04 00:15:54.944040 | orchestrator | ++ EXTERNAL_API=false 2026-02-04 00:15:54.944051 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-04 00:15:54.944062 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-04 00:15:54.944072 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-04 00:15:54.944083 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-04 00:15:54.944095 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-04 00:15:54.944106 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-04 00:15:54.944117 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-04 00:15:54.944128 | orchestrator | ++ export INTERACTIVE=false 2026-02-04 00:15:54.944139 | orchestrator | ++ INTERACTIVE=false 2026-02-04 00:15:54.944150 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-04 00:15:54.944165 | orchestrator | ++ OSISM_APPLY_RETRY=1 
2026-02-04 00:15:54.944177 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-04 00:15:54.944188 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0
2026-02-04 00:15:54.950542 | orchestrator | + set -e
2026-02-04 00:15:54.950587 | orchestrator | + VERSION=9.5.0
2026-02-04 00:15:54.950602 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml
2026-02-04 00:15:54.957666 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-04 00:15:54.957807 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-04 00:15:54.961661 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-04 00:15:54.966153 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-02-04 00:15:54.975113 | orchestrator | + set -e
2026-02-04 00:15:54.975397 | orchestrator | /opt/configuration ~
2026-02-04 00:15:54.975421 | orchestrator | + pushd /opt/configuration
2026-02-04 00:15:54.975436 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-04 00:15:54.976686 | orchestrator | + source /opt/venv/bin/activate
2026-02-04 00:15:54.977872 | orchestrator | ++ deactivate nondestructive
2026-02-04 00:15:54.977897 | orchestrator | ++ '[' -n '' ']'
2026-02-04 00:15:54.978147 | orchestrator | ++ '[' -n '' ']'
2026-02-04 00:15:54.979239 | orchestrator | ++ hash -r
2026-02-04 00:15:54.979274 | orchestrator | ++ '[' -n '' ']'
2026-02-04 00:15:54.979288 | orchestrator | ++ unset VIRTUAL_ENV
2026-02-04 00:15:54.979304 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-04 00:15:54.979317 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-04 00:15:54.979332 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-04 00:15:54.979346 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-04 00:15:54.979358 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-04 00:15:54.979369 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-04 00:15:54.979381 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-04 00:15:54.979395 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-04 00:15:54.979415 | orchestrator | ++ export PATH
2026-02-04 00:15:54.979452 | orchestrator | ++ '[' -n '' ']'
2026-02-04 00:15:54.979471 | orchestrator | ++ '[' -z '' ']'
2026-02-04 00:15:54.979502 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-04 00:15:54.979518 | orchestrator | ++ PS1='(venv) '
2026-02-04 00:15:54.979536 | orchestrator | ++ export PS1
2026-02-04 00:15:54.979555 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-04 00:15:54.979573 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-04 00:15:54.979592 | orchestrator | ++ hash -r
2026-02-04 00:15:54.979609 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-02-04 00:15:55.941039 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-02-04 00:15:55.942142 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-02-04 00:15:55.943516 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-02-04 00:15:55.944925 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-02-04 00:15:55.945898 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-02-04 00:15:55.955698 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-02-04 00:15:55.957261 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-02-04 00:15:55.958050 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-02-04 00:15:55.959379 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-02-04 00:15:55.987876 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
2026-02-04 00:15:55.989116 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-02-04 00:15:55.990901 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-02-04 00:15:55.992090 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
2026-02-04 00:15:55.996002 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-02-04 00:15:56.193997 | orchestrator | ++ which gilt
2026-02-04 00:15:56.198161 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-02-04 00:15:56.198222 | orchestrator | + /opt/venv/bin/gilt overlay
2026-02-04 00:15:56.398252 | orchestrator | osism.cfg-generics:
2026-02-04 00:15:56.548627 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-02-04 00:15:56.549472 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-02-04 00:15:56.550423 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-02-04 00:15:56.550454 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-02-04 00:15:57.061607 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-02-04 00:15:57.073813 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-02-04 00:15:57.379427 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-02-04 00:15:57.424657 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-04 00:15:57.424769 | orchestrator | + deactivate
2026-02-04 00:15:57.424783 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-02-04 00:15:57.424794 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-04 00:15:57.424803 | orchestrator | + export PATH
2026-02-04 00:15:57.424813 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-02-04 00:15:57.424822 | orchestrator | + '[' -n '' ']'
2026-02-04 00:15:57.424833 | orchestrator | + hash -r
2026-02-04 00:15:57.424842 | orchestrator | + '[' -n '' ']'
2026-02-04 00:15:57.424851 | orchestrator | + unset VIRTUAL_ENV
2026-02-04 00:15:57.424860 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-02-04 00:15:57.424869 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-02-04 00:15:57.424878 | orchestrator | + unset -f deactivate
2026-02-04 00:15:57.424897 | orchestrator | + popd
2026-02-04 00:15:57.424907 | orchestrator | ~
2026-02-04 00:15:57.426866 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-02-04 00:15:57.426903 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-02-04 00:15:57.427202 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-04 00:15:57.477507 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-04 00:15:57.477606 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-02-04 00:15:57.478060 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-04 00:15:57.532981 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-04 00:15:57.533273 | orchestrator | ++ semver 2024.2 2025.1
2026-02-04 00:15:57.586996 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-04 00:15:57.587093 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-02-04 00:15:57.658414 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-04 00:15:57.658538 | orchestrator | + source /opt/venv/bin/activate
2026-02-04 00:15:57.658555 | orchestrator | ++ deactivate nondestructive
2026-02-04 00:15:57.658568 | orchestrator | ++ '[' -n '' ']'
2026-02-04 00:15:57.658580 | orchestrator | ++ '[' -n '' ']'
2026-02-04 00:15:57.658591 | orchestrator | ++ hash -r
2026-02-04 00:15:57.658602 | orchestrator | ++ '[' -n '' ']'
2026-02-04 00:15:57.658613 | orchestrator | ++ unset VIRTUAL_ENV
2026-02-04 00:15:57.658624 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-04 00:15:57.658636 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-04 00:15:57.658648 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-04 00:15:57.658660 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-04 00:15:57.658683 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-04 00:15:57.658695 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-04 00:15:57.658707 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-04 00:15:57.658785 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-04 00:15:57.658809 | orchestrator | ++ export PATH
2026-02-04 00:15:57.658829 | orchestrator | ++ '[' -n '' ']'
2026-02-04 00:15:57.658848 | orchestrator | ++ '[' -z '' ']'
2026-02-04 00:15:57.658860 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-04 00:15:57.658870 | orchestrator | ++ PS1='(venv) '
2026-02-04 00:15:57.658882 | orchestrator | ++ export PS1
2026-02-04 00:15:57.658893 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-04 00:15:57.658904 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-04 00:15:57.658915 | orchestrator | ++ hash -r
2026-02-04 00:15:57.658926 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-02-04 00:15:58.737721 | orchestrator |
2026-02-04 00:15:58.737852 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-02-04 00:15:58.737867 | orchestrator |
2026-02-04 00:15:58.737877 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-04 00:15:59.278296 | orchestrator | ok: [testbed-manager]
2026-02-04 00:15:59.278435 | orchestrator |
2026-02-04 00:15:59.278454 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-04 00:16:00.193988 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:00.194137 | orchestrator |
2026-02-04 00:16:00.194163 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-02-04 00:16:00.194224 | orchestrator |
2026-02-04 00:16:00.194244 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-04 00:16:02.316457 | orchestrator | ok: [testbed-manager]
2026-02-04 00:16:02.317367 | orchestrator |
2026-02-04 00:16:02.317395 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-02-04 00:16:02.366190 | orchestrator | ok: [testbed-manager]
2026-02-04 00:16:02.366279 | orchestrator |
2026-02-04 00:16:02.366296 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-02-04 00:16:02.813456 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:02.813577 | orchestrator |
2026-02-04 00:16:02.813608 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-02-04 00:16:02.852307 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:16:02.852402 | orchestrator |
2026-02-04 00:16:02.852417 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-02-04 00:16:03.197772 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:03.197868 | orchestrator |
2026-02-04 00:16:03.197887 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-02-04 00:16:03.532335 | orchestrator | ok: [testbed-manager]
2026-02-04 00:16:03.532508 | orchestrator |
2026-02-04 00:16:03.532528 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-02-04 00:16:03.668768 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:16:03.668875 | orchestrator |
2026-02-04 00:16:03.668890 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-02-04 00:16:03.668902 | orchestrator |
2026-02-04 00:16:03.668912 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-04 00:16:05.328298 | orchestrator | ok: [testbed-manager]
2026-02-04 00:16:05.328404 | orchestrator |
2026-02-04 00:16:05.328420 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-02-04 00:16:05.429151 | orchestrator | included: osism.services.traefik for testbed-manager
2026-02-04 00:16:05.429240 | orchestrator |
2026-02-04 00:16:05.429256 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-02-04 00:16:05.484099 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-02-04 00:16:05.484198 | orchestrator |
2026-02-04 00:16:05.484219 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-02-04 00:16:06.568645 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-02-04 00:16:06.568782 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-02-04 00:16:06.568798 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-02-04 00:16:06.568811 | orchestrator |
2026-02-04 00:16:06.568825 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-02-04 00:16:08.352026 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-02-04 00:16:08.352104 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-02-04 00:16:08.352115 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-02-04 00:16:08.352124 | orchestrator |
2026-02-04 00:16:08.352133 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-02-04 00:16:08.935289 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-04 00:16:08.935387 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:08.935405 | orchestrator |
2026-02-04 00:16:08.935418 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-02-04 00:16:09.486543 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-04 00:16:09.486636 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:09.486657 | orchestrator |
2026-02-04 00:16:09.486672 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-02-04 00:16:09.527039 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:16:09.527131 | orchestrator |
2026-02-04 00:16:09.527149 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-02-04 00:16:09.837101 | orchestrator | ok: [testbed-manager]
2026-02-04 00:16:09.837194 | orchestrator |
2026-02-04 00:16:09.837212 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-02-04 00:16:09.896206 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-02-04 00:16:09.896293 | orchestrator |
2026-02-04 00:16:09.896308 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-02-04 00:16:10.840812 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:10.840889 | orchestrator |
2026-02-04 00:16:10.840901 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-02-04 00:16:11.492456 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:11.492547 | orchestrator |
2026-02-04 00:16:11.492567 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-02-04 00:16:28.322555 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:28.322672 | orchestrator |
2026-02-04 00:16:28.322690 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-02-04 00:16:28.368936 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:16:28.369020 | orchestrator |
2026-02-04 00:16:28.369054 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-02-04 00:16:28.369067 | orchestrator |
2026-02-04 00:16:28.369077 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-04 00:16:30.142112 | orchestrator | ok: [testbed-manager]
2026-02-04 00:16:30.142216 | orchestrator |
2026-02-04 00:16:30.142233 | orchestrator | TASK [Apply manager role] ******************************************************
2026-02-04 00:16:30.243324 | orchestrator | included: osism.services.manager for testbed-manager
2026-02-04 00:16:30.243395 | orchestrator |
2026-02-04 00:16:30.243404 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-02-04 00:16:30.290276 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-02-04 00:16:30.290363 | orchestrator |
2026-02-04 00:16:30.290378 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-02-04 00:16:32.784904 | orchestrator | ok: [testbed-manager]
2026-02-04 00:16:32.784996 | orchestrator |
2026-02-04 00:16:32.785014 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-02-04 00:16:32.827968 | orchestrator | ok: [testbed-manager]
2026-02-04 00:16:32.828079 | orchestrator |
2026-02-04 00:16:32.828095 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-02-04 00:16:32.945252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-02-04 00:16:32.945374 | orchestrator |
2026-02-04 00:16:32.945390 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-02-04 00:16:35.671051 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-02-04 00:16:35.671160 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-02-04 00:16:35.671183 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-02-04 00:16:35.671201 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-02-04 00:16:35.671219 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-02-04 00:16:35.671237 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-02-04 00:16:35.671256 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-02-04 00:16:35.671274 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-02-04 00:16:35.671291 | orchestrator |
2026-02-04 00:16:35.671309 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-02-04 00:16:36.274133 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:36.274232 | orchestrator |
2026-02-04 00:16:36.274253 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-02-04 00:16:36.895778 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:36.895883 | orchestrator |
2026-02-04 00:16:36.895902 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-02-04 00:16:36.968225 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-02-04 00:16:36.968304 | orchestrator |
2026-02-04 00:16:36.968317 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-02-04 00:16:38.174233 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-02-04 00:16:38.174341 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-02-04 00:16:38.174361 | orchestrator |
2026-02-04 00:16:38.174376 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-02-04 00:16:38.797372 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:38.797507 | orchestrator |
2026-02-04 00:16:38.797550 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-02-04 00:16:38.853750 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:16:38.853859 | orchestrator |
2026-02-04 00:16:38.853882 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-02-04 00:16:38.923945 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-02-04 00:16:38.924015 | orchestrator |
2026-02-04 00:16:38.924025 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-02-04 00:16:39.502208 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:39.502290 | orchestrator |
2026-02-04 00:16:39.502302 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-02-04 00:16:39.560579 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-02-04 00:16:39.560674 | orchestrator |
2026-02-04 00:16:39.560690 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-02-04 00:16:40.864599 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-04 00:16:40.864705 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-04 00:16:40.864721 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:40.864733 | orchestrator |
2026-02-04 00:16:40.864745 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-02-04 00:16:41.462499 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:41.462596 | orchestrator |
2026-02-04 00:16:41.462615 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-02-04 00:16:41.498328 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:16:41.498440 | orchestrator |
2026-02-04 00:16:41.498457 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-02-04 00:16:41.573439 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-02-04 00:16:41.573525 | orchestrator |
2026-02-04 00:16:41.573540 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-02-04 00:16:42.078601 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:42.078730 | orchestrator |
2026-02-04 00:16:42.078749 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-02-04 00:16:42.481322 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:42.481415 | orchestrator |
2026-02-04 00:16:42.481432 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-02-04 00:16:43.686408 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-02-04 00:16:43.686499 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-02-04 00:16:43.686515 | orchestrator |
2026-02-04 00:16:43.686528 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-02-04 00:16:44.347565 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:44.347672 | orchestrator |
2026-02-04 00:16:44.347720 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-02-04 00:16:44.735240 | orchestrator | ok: [testbed-manager]
2026-02-04 00:16:44.735340 | orchestrator |
2026-02-04 00:16:44.735362 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-02-04 00:16:45.100008 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:45.100123 | orchestrator |
2026-02-04 00:16:45.100142 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-02-04 00:16:45.147675 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:16:45.147794 | orchestrator |
2026-02-04 00:16:45.147809 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-02-04 00:16:45.206265 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-02-04 00:16:45.206416 | orchestrator |
2026-02-04 00:16:45.206442 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-02-04 00:16:45.249923 | orchestrator | ok: [testbed-manager]
2026-02-04 00:16:45.250070 | orchestrator |
2026-02-04 00:16:45.250093 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-02-04 00:16:47.241889 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-02-04 00:16:47.241979 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-02-04 00:16:47.241995 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-02-04 00:16:47.242007 | orchestrator |
2026-02-04 00:16:47.242069 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-02-04 00:16:47.941757 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:47.941850 | orchestrator |
2026-02-04 00:16:47.941867 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-02-04 00:16:48.638644 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:48.638762 | orchestrator |
2026-02-04 00:16:48.638779 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-02-04 00:16:49.350459 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:49.350582 | orchestrator |
2026-02-04 00:16:49.350601 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-02-04 00:16:49.416248 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-02-04 00:16:49.416353 | orchestrator |
2026-02-04 00:16:49.416373 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-02-04 00:16:49.462277 | orchestrator | ok: [testbed-manager]
2026-02-04 00:16:49.462363 | orchestrator |
2026-02-04 00:16:49.462378 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-02-04 00:16:50.148380 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-02-04 00:16:50.148486 | orchestrator |
2026-02-04 00:16:50.148507 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-02-04 00:16:50.232259 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-02-04 00:16:50.232354 | orchestrator |
2026-02-04 00:16:50.232370 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-02-04 00:16:50.917656 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:50.917766 | orchestrator |
2026-02-04 00:16:50.917782 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-02-04 00:16:51.503711 | orchestrator | ok: [testbed-manager]
2026-02-04 00:16:51.503805 | orchestrator |
2026-02-04 00:16:51.503822 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-02-04 00:16:51.560085 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:16:51.560227 | orchestrator |
2026-02-04 00:16:51.560250 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-02-04 00:16:51.629045 | orchestrator | ok: [testbed-manager]
2026-02-04 00:16:51.629141 | orchestrator |
2026-02-04 00:16:51.629162 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-02-04 00:16:52.446478 | orchestrator | changed: [testbed-manager]
2026-02-04 00:16:52.446595 | orchestrator |
2026-02-04 00:16:52.446614 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-02-04 00:17:54.593389 | orchestrator | changed: [testbed-manager]
2026-02-04 00:17:54.593515 | orchestrator |
2026-02-04 00:17:54.593559 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-02-04 00:17:55.519201 | orchestrator | ok: [testbed-manager]
2026-02-04 00:17:55.519364 | orchestrator |
2026-02-04 00:17:55.519386 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-02-04 00:17:55.576501 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:17:55.576678 | orchestrator |
2026-02-04 00:17:55.576715 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-02-04 00:17:57.925983 | orchestrator | changed: [testbed-manager]
2026-02-04 00:17:57.926146 | orchestrator |
2026-02-04 00:17:57.926166 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-02-04 00:17:57.986748 | orchestrator | ok: [testbed-manager]
2026-02-04 00:17:57.986878 | orchestrator |
2026-02-04 00:17:57.986901 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-04 00:17:57.986914 | orchestrator |
2026-02-04 00:17:57.986926 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-02-04 00:17:58.095334 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:17:58.095435 | orchestrator |
2026-02-04 00:17:58.095453 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-02-04 00:18:58.145950 | orchestrator | Pausing for 60 seconds
2026-02-04 00:18:58.146130 | orchestrator | changed: [testbed-manager]
2026-02-04 00:18:58.146151 | orchestrator |
2026-02-04 00:18:58.146165 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-02-04 00:19:00.711953 | orchestrator | changed: [testbed-manager]
2026-02-04 00:19:00.712035 | orchestrator |
2026-02-04 00:19:00.712047 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-02-04 00:19:42.187372 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-02-04 00:19:42.187515 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-02-04 00:19:42.187575 | orchestrator | changed: [testbed-manager] 2026-02-04 00:19:42.187599 | orchestrator | 2026-02-04 00:19:42.187649 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-04 00:19:52.320935 | orchestrator | changed: [testbed-manager] 2026-02-04 00:19:52.321065 | orchestrator | 2026-02-04 00:19:52.321096 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-04 00:19:52.401020 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-04 00:19:52.401130 | orchestrator | 2026-02-04 00:19:52.401154 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-04 00:19:52.401173 | orchestrator | 2026-02-04 00:19:52.401194 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-04 00:19:52.443629 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:19:52.443773 | orchestrator | 2026-02-04 00:19:52.443801 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-04 00:19:52.503628 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-04 00:19:52.503733 | orchestrator | 2026-02-04 00:19:52.503752 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-04 00:19:53.234198 | orchestrator | changed: [testbed-manager] 2026-02-04 00:19:53.234279 | orchestrator | 2026-02-04 00:19:53.234290 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-04 00:19:56.258308 | orchestrator | ok: [testbed-manager] 2026-02-04 00:19:56.258435 | orchestrator | 2026-02-04 00:19:56.258454 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-02-04 00:19:56.310852 | orchestrator | ok: [testbed-manager] => { 2026-02-04 00:19:56.310950 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-04 00:19:56.310966 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-04 00:19:56.310978 | orchestrator | "Checking running containers against expected versions...", 2026-02-04 00:19:56.310990 | orchestrator | "", 2026-02-04 00:19:56.311002 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-04 00:19:56.311013 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-02-04 00:19:56.311025 | orchestrator | " Enabled: true", 2026-02-04 00:19:56.311037 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-02-04 00:19:56.311048 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:19:56.311059 | orchestrator | "", 2026-02-04 00:19:56.311070 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-04 00:19:56.311082 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-02-04 00:19:56.311093 | orchestrator | " Enabled: true", 2026-02-04 00:19:56.311131 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-02-04 00:19:56.311143 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:19:56.311154 | orchestrator | "", 2026-02-04 00:19:56.311165 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-04 00:19:56.311176 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-02-04 00:19:56.311189 | orchestrator | " Enabled: true", 2026-02-04 00:19:56.311209 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-02-04 00:19:56.311227 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:19:56.311245 | orchestrator | 
"", 2026-02-04 00:19:56.311264 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-04 00:19:56.311282 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-02-04 00:19:56.311301 | orchestrator | " Enabled: true", 2026-02-04 00:19:56.311318 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-02-04 00:19:56.311336 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:19:56.311356 | orchestrator | "", 2026-02-04 00:19:56.311376 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-04 00:19:56.311399 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-02-04 00:19:56.311420 | orchestrator | " Enabled: true", 2026-02-04 00:19:56.311441 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-02-04 00:19:56.311460 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:19:56.311480 | orchestrator | "", 2026-02-04 00:19:56.311500 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-04 00:19:56.311544 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-04 00:19:56.311566 | orchestrator | " Enabled: true", 2026-02-04 00:19:56.311586 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-04 00:19:56.311607 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:19:56.311629 | orchestrator | "", 2026-02-04 00:19:56.311649 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-02-04 00:19:56.311667 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-04 00:19:56.311681 | orchestrator | " Enabled: true", 2026-02-04 00:19:56.311694 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-04 00:19:56.311707 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:19:56.311720 | orchestrator | "", 2026-02-04 00:19:56.311733 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-02-04 00:19:56.311745 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-04 00:19:56.311757 | orchestrator | " Enabled: true", 2026-02-04 00:19:56.311768 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-04 00:19:56.311778 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:19:56.311789 | orchestrator | "", 2026-02-04 00:19:56.311800 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-04 00:19:56.311811 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-04 00:19:56.311822 | orchestrator | " Enabled: true", 2026-02-04 00:19:56.311832 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-04 00:19:56.311843 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:19:56.311854 | orchestrator | "", 2026-02-04 00:19:56.311865 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-04 00:19:56.311875 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-04 00:19:56.311886 | orchestrator | " Enabled: true", 2026-02-04 00:19:56.311897 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-04 00:19:56.311908 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:19:56.311992 | orchestrator | "", 2026-02-04 00:19:56.312005 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-04 00:19:56.312016 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-04 00:19:56.312026 | orchestrator | " Enabled: true", 2026-02-04 00:19:56.312050 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-04 00:19:56.312061 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:19:56.312071 | orchestrator | "", 2026-02-04 00:19:56.312082 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-04 00:19:56.312093 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-04 00:19:56.312103 | orchestrator | " Enabled: true", 2026-02-04 00:19:56.312114 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-04 00:19:56.312125 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:19:56.312136 | orchestrator | "", 2026-02-04 00:19:56.312147 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-04 00:19:56.312158 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-04 00:19:56.312169 | orchestrator | " Enabled: true", 2026-02-04 00:19:56.312180 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-04 00:19:56.312191 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:19:56.312202 | orchestrator | "", 2026-02-04 00:19:56.312212 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-04 00:19:56.312223 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-04 00:19:56.312234 | orchestrator | " Enabled: true", 2026-02-04 00:19:56.312245 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-04 00:19:56.312275 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:19:56.312287 | orchestrator | "", 2026-02-04 00:19:56.312298 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-04 00:19:56.312309 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-04 00:19:56.312320 | orchestrator | " Enabled: true", 2026-02-04 00:19:56.312340 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-04 00:19:56.312351 | orchestrator | " Status: ✅ MATCH", 2026-02-04 00:19:56.312362 | orchestrator | "", 2026-02-04 00:19:56.312373 | orchestrator | "=== Summary ===", 2026-02-04 00:19:56.312384 | orchestrator | "Errors (version mismatches): 0", 2026-02-04 00:19:56.312395 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-02-04 00:19:56.312406 | orchestrator | "", 2026-02-04 00:19:56.312417 | orchestrator | "✅ All running containers match expected versions!" 2026-02-04 00:19:56.312428 | orchestrator | ] 2026-02-04 00:19:56.312439 | orchestrator | } 2026-02-04 00:19:56.312450 | orchestrator | 2026-02-04 00:19:56.312461 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-04 00:19:56.361444 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:19:56.361593 | orchestrator | 2026-02-04 00:19:56.361613 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:19:56.361627 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-02-04 00:19:56.361638 | orchestrator | 2026-02-04 00:19:56.458777 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-04 00:19:56.458884 | orchestrator | + deactivate 2026-02-04 00:19:56.458918 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-04 00:19:56.458933 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-04 00:19:56.458944 | orchestrator | + export PATH 2026-02-04 00:19:56.458956 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-04 00:19:56.458968 | orchestrator | + '[' -n '' ']' 2026-02-04 00:19:56.458980 | orchestrator | + hash -r 2026-02-04 00:19:56.458991 | orchestrator | + '[' -n '' ']' 2026-02-04 00:19:56.459001 | orchestrator | + unset VIRTUAL_ENV 2026-02-04 00:19:56.459012 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-04 00:19:56.459023 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-04 00:19:56.459034 | orchestrator | + unset -f deactivate 2026-02-04 00:19:56.459046 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-02-04 00:19:56.467186 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-04 00:19:56.467246 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-04 00:19:56.467260 | orchestrator | + local max_attempts=60 2026-02-04 00:19:56.467273 | orchestrator | + local name=ceph-ansible 2026-02-04 00:19:56.467311 | orchestrator | + local attempt_num=1 2026-02-04 00:19:56.467952 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-04 00:19:56.494690 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-04 00:19:56.494765 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-04 00:19:56.494780 | orchestrator | + local max_attempts=60 2026-02-04 00:19:56.494793 | orchestrator | + local name=kolla-ansible 2026-02-04 00:19:56.494805 | orchestrator | + local attempt_num=1 2026-02-04 00:19:56.495229 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-04 00:19:56.523197 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-04 00:19:56.523266 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-04 00:19:56.523279 | orchestrator | + local max_attempts=60 2026-02-04 00:19:56.523291 | orchestrator | + local name=osism-ansible 2026-02-04 00:19:56.523302 | orchestrator | + local attempt_num=1 2026-02-04 00:19:56.523731 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-04 00:19:56.548456 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-04 00:19:56.548551 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-04 00:19:56.548569 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-04 00:19:57.221675 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-02-04 00:19:57.379574 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-04 00:19:57.379678 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-02-04 00:19:57.379694 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-02-04 00:19:57.379706 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-02-04 00:19:57.379718 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-02-04 00:19:57.379750 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-02-04 00:19:57.379762 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-02-04 00:19:57.379773 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 56 seconds (healthy) 2026-02-04 00:19:57.379784 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-02-04 00:19:57.379794 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-02-04 00:19:57.379806 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 
"/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2026-02-04 00:19:57.379817 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-02-04 00:19:57.379828 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-02-04 00:19:57.379864 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-02-04 00:19:57.379876 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-02-04 00:19:57.379888 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-02-04 00:19:57.385496 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-04 00:19:57.426443 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-04 00:19:57.426596 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-02-04 00:19:57.429314 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-02-04 00:20:09.600808 | orchestrator | 2026-02-04 00:20:09 | INFO  | Task af5dabd3-a84e-46af-8f73-c8925c2605fe (resolvconf) was prepared for execution. 2026-02-04 00:20:09.600893 | orchestrator | 2026-02-04 00:20:09 | INFO  | It takes a moment until task af5dabd3-a84e-46af-8f73-c8925c2605fe (resolvconf) has been started and output is visible here. 
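The shell trace above polls each support container with `wait_for_container_healthy`, reading `docker inspect -f '{{.State.Health.Status}}'` until it reports `healthy`. A minimal sketch of that retry pattern, with a stand-in `check_health` function (hypothetical, simulating a container that warms up after two polls) in place of the real `docker inspect` call:

```shell
# Retry loop modeled on the wait_for_container_healthy trace above.
wait_for_healthy() {
    local max_attempts=$1 name=$2 attempt_num=1
    while [ "$(check_health "$name")" != healthy ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "$name: giving up after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 1
    done
    echo "$name is healthy"
}

# Stand-in for `docker inspect -f '{{.State.Health.Status}}'`:
# reports "starting" twice, then "healthy", via a counter file.
check_health() {
    local count_file="/tmp/health_$1" n
    n=$(cat "$count_file" 2>/dev/null || echo 0)
    echo $((n + 1)) > "$count_file"
    if [ "$n" -ge 2 ]; then echo healthy; else echo starting; fi
}

rm -f /tmp/health_demo
wait_for_healthy 10 demo   # prints "demo is healthy" after two "starting" polls
```

In the real script the check body is `docker inspect -f '{{.State.Health.Status}}' "$name"`, which requires the container to define a `HEALTHCHECK`; containers without one never leave the empty/`none` state and would loop until `max_attempts`.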
2026-02-04 00:20:23.761409 | orchestrator | 2026-02-04 00:20:23.761542 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-02-04 00:20:23.761560 | orchestrator | 2026-02-04 00:20:23.761572 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-04 00:20:23.761584 | orchestrator | Wednesday 04 February 2026 00:20:13 +0000 (0:00:00.135) 0:00:00.135 **** 2026-02-04 00:20:23.761595 | orchestrator | ok: [testbed-manager] 2026-02-04 00:20:23.761607 | orchestrator | 2026-02-04 00:20:23.761618 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-04 00:20:23.761631 | orchestrator | Wednesday 04 February 2026 00:20:18 +0000 (0:00:04.474) 0:00:04.610 **** 2026-02-04 00:20:23.761642 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:20:23.761654 | orchestrator | 2026-02-04 00:20:23.761665 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-04 00:20:23.761676 | orchestrator | Wednesday 04 February 2026 00:20:18 +0000 (0:00:00.053) 0:00:04.663 **** 2026-02-04 00:20:23.761687 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-02-04 00:20:23.761699 | orchestrator | 2026-02-04 00:20:23.761711 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-04 00:20:23.761722 | orchestrator | Wednesday 04 February 2026 00:20:18 +0000 (0:00:00.067) 0:00:04.731 **** 2026-02-04 00:20:23.761748 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-02-04 00:20:23.761761 | orchestrator | 2026-02-04 00:20:23.761772 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-02-04 00:20:23.761783 | orchestrator | Wednesday 04 February 2026 00:20:18 +0000 (0:00:00.069) 0:00:04.801 **** 2026-02-04 00:20:23.761794 | orchestrator | ok: [testbed-manager] 2026-02-04 00:20:23.761805 | orchestrator | 2026-02-04 00:20:23.761836 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-04 00:20:23.761847 | orchestrator | Wednesday 04 February 2026 00:20:19 +0000 (0:00:01.044) 0:00:05.845 **** 2026-02-04 00:20:23.761859 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:20:23.761870 | orchestrator | 2026-02-04 00:20:23.761881 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-04 00:20:23.761892 | orchestrator | Wednesday 04 February 2026 00:20:19 +0000 (0:00:00.065) 0:00:05.910 **** 2026-02-04 00:20:23.761903 | orchestrator | ok: [testbed-manager] 2026-02-04 00:20:23.761936 | orchestrator | 2026-02-04 00:20:23.761949 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-04 00:20:23.761962 | orchestrator | Wednesday 04 February 2026 00:20:19 +0000 (0:00:00.489) 0:00:06.400 **** 2026-02-04 00:20:23.761976 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:20:23.761988 | orchestrator | 2026-02-04 00:20:23.762000 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-04 00:20:23.762014 | orchestrator | Wednesday 04 February 2026 00:20:19 +0000 (0:00:00.069) 0:00:06.470 **** 2026-02-04 00:20:23.762070 | orchestrator | changed: [testbed-manager] 2026-02-04 00:20:23.762082 | orchestrator | 2026-02-04 00:20:23.762095 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-04 00:20:23.762107 | orchestrator | Wednesday 04 February 2026 00:20:20 +0000 (0:00:00.488) 0:00:06.959 **** 2026-02-04 00:20:23.762120 | orchestrator | changed: 
[testbed-manager] 2026-02-04 00:20:23.762132 | orchestrator | 2026-02-04 00:20:23.762144 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-04 00:20:23.762157 | orchestrator | Wednesday 04 February 2026 00:20:21 +0000 (0:00:01.038) 0:00:07.997 **** 2026-02-04 00:20:23.762170 | orchestrator | ok: [testbed-manager] 2026-02-04 00:20:23.762183 | orchestrator | 2026-02-04 00:20:23.762196 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-04 00:20:23.762209 | orchestrator | Wednesday 04 February 2026 00:20:22 +0000 (0:00:00.929) 0:00:08.927 **** 2026-02-04 00:20:23.762221 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-02-04 00:20:23.762234 | orchestrator | 2026-02-04 00:20:23.762246 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-04 00:20:23.762259 | orchestrator | Wednesday 04 February 2026 00:20:22 +0000 (0:00:00.074) 0:00:09.001 **** 2026-02-04 00:20:23.762271 | orchestrator | changed: [testbed-manager] 2026-02-04 00:20:23.762283 | orchestrator | 2026-02-04 00:20:23.762296 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:20:23.762310 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-04 00:20:23.762323 | orchestrator | 2026-02-04 00:20:23.762335 | orchestrator | 2026-02-04 00:20:23.762346 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:20:23.762357 | orchestrator | Wednesday 04 February 2026 00:20:23 +0000 (0:00:01.141) 0:00:10.143 **** 2026-02-04 00:20:23.762367 | orchestrator | =============================================================================== 2026-02-04 00:20:23.762378 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.47s 2026-02-04 00:20:23.762388 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.14s 2026-02-04 00:20:23.762399 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.04s 2026-02-04 00:20:23.762410 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.04s 2026-02-04 00:20:23.762420 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.93s 2026-02-04 00:20:23.762431 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s 2026-02-04 00:20:23.762460 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.49s 2026-02-04 00:20:23.762472 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2026-02-04 00:20:23.762483 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2026-02-04 00:20:23.762493 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2026-02-04 00:20:23.762534 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2026-02-04 00:20:23.762546 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-02-04 00:20:23.762565 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s 2026-02-04 00:20:24.025184 | orchestrator | + osism apply sshconfig 2026-02-04 00:20:35.940380 | orchestrator | 2026-02-04 00:20:35 | INFO  | Task 623d9750-a4a7-4647-8302-f0915ba55539 (sshconfig) was prepared for execution. 
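The `osism apply sshconfig` run that starts here writes one config fragment per host ("Ensure config for each host exist") and then concatenates them ("Assemble ssh config"). A minimal sketch of that fragment-assembly pattern, under the assumption that fragments live in a `config.d` directory next to the final config; the host names, domain, and user below are illustrative, not taken from the testbed inventory:

```shell
# One fragment per host under config.d, then assembled into a single config file.
confdir=$(mktemp -d)/config.d
mkdir -p "$confdir"

for host in node-0 node-1; do
    cat > "$confdir/$host" <<EOF
Host $host
    HostName $host.example.test
    User demo
EOF
done

# "Assemble ssh config": concatenate the fragments in a stable (glob-sorted) order.
cat "$confdir"/* > "$(dirname "$confdir")/config"
grep -c '^Host ' "$(dirname "$confdir")/config"   # prints 2
```

Keeping per-host fragments and regenerating the assembled file makes the task idempotent: re-running it rewrites each fragment and reassembles, rather than appending to a growing `~/.ssh/config`.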
2026-02-04 00:20:35.940641 | orchestrator | 2026-02-04 00:20:35 | INFO  | It takes a moment until task 623d9750-a4a7-4647-8302-f0915ba55539 (sshconfig) has been started and output is visible here. 2026-02-04 00:20:46.124019 | orchestrator | 2026-02-04 00:20:46.124103 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-02-04 00:20:46.124111 | orchestrator | 2026-02-04 00:20:46.124129 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-02-04 00:20:46.124134 | orchestrator | Wednesday 04 February 2026 00:20:39 +0000 (0:00:00.117) 0:00:00.117 **** 2026-02-04 00:20:46.124139 | orchestrator | ok: [testbed-manager] 2026-02-04 00:20:46.124151 | orchestrator | 2026-02-04 00:20:46.124155 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-02-04 00:20:46.124160 | orchestrator | Wednesday 04 February 2026 00:20:40 +0000 (0:00:00.510) 0:00:00.628 **** 2026-02-04 00:20:46.124164 | orchestrator | changed: [testbed-manager] 2026-02-04 00:20:46.124170 | orchestrator | 2026-02-04 00:20:46.124174 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-02-04 00:20:46.124178 | orchestrator | Wednesday 04 February 2026 00:20:40 +0000 (0:00:00.421) 0:00:01.049 **** 2026-02-04 00:20:46.124183 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-02-04 00:20:46.124187 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-02-04 00:20:46.124191 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-02-04 00:20:46.124196 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-02-04 00:20:46.124200 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-02-04 00:20:46.124204 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-02-04 00:20:46.124208 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-02-04 00:20:46.124212 | orchestrator | 2026-02-04 00:20:46.124216 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-02-04 00:20:46.124220 | orchestrator | Wednesday 04 February 2026 00:20:45 +0000 (0:00:04.950) 0:00:06.000 **** 2026-02-04 00:20:46.124224 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:20:46.124228 | orchestrator | 2026-02-04 00:20:46.124232 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-02-04 00:20:46.124236 | orchestrator | Wednesday 04 February 2026 00:20:45 +0000 (0:00:00.066) 0:00:06.066 **** 2026-02-04 00:20:46.124240 | orchestrator | changed: [testbed-manager] 2026-02-04 00:20:46.124244 | orchestrator | 2026-02-04 00:20:46.124248 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:20:46.124253 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:20:46.124258 | orchestrator | 2026-02-04 00:20:46.124263 | orchestrator | 2026-02-04 00:20:46.124267 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:20:46.124271 | orchestrator | Wednesday 04 February 2026 00:20:45 +0000 (0:00:00.454) 0:00:06.521 **** 2026-02-04 00:20:46.124275 | orchestrator | =============================================================================== 2026-02-04 00:20:46.124279 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 4.95s 2026-02-04 00:20:46.124283 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.51s 2026-02-04 00:20:46.124287 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.45s 2026-02-04 00:20:46.124291 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.42s 2026-02-04 00:20:46.124295 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2026-02-04 00:20:46.311902 | orchestrator | + osism apply known-hosts 2026-02-04 00:20:58.090954 | orchestrator | 2026-02-04 00:20:58 | INFO  | Task 10bd9bc2-884a-4af6-933b-b354bcd7ff01 (known-hosts) was prepared for execution. 2026-02-04 00:20:58.091062 | orchestrator | 2026-02-04 00:20:58 | INFO  | It takes a moment until task 10bd9bc2-884a-4af6-933b-b354bcd7ff01 (known-hosts) has been started and output is visible here. 2026-02-04 00:21:13.433686 | orchestrator | 2026-02-04 00:21:13.433815 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-02-04 00:21:13.433834 | orchestrator | 2026-02-04 00:21:13.433846 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-02-04 00:21:13.433859 | orchestrator | Wednesday 04 February 2026 00:21:02 +0000 (0:00:00.122) 0:00:00.122 **** 2026-02-04 00:21:13.433871 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-04 00:21:13.433883 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-04 00:21:13.433894 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-04 00:21:13.433905 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-04 00:21:13.433916 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-04 00:21:13.433927 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-04 00:21:13.433938 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-04 00:21:13.433949 | orchestrator | 2026-02-04 00:21:13.433960 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-02-04 00:21:13.433972 | orchestrator | Wednesday 04 February 2026 00:21:07 +0000 (0:00:05.631) 0:00:05.753 **** 2026-02-04 
00:21:13.433984 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-04 00:21:13.433998 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-04 00:21:13.434009 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-04 00:21:13.434090 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-04 00:21:13.434111 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-04 00:21:13.434137 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-04 00:21:13.434149 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-04 00:21:13.434160 | orchestrator | 2026-02-04 00:21:13.434171 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:21:13.434182 | orchestrator | Wednesday 04 February 2026 00:21:07 +0000 (0:00:00.146) 0:00:05.899 **** 2026-02-04 00:21:13.434197 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJDaKYCkd2wla6n4JmeDDoIyB2nj4oxymlXnqcvIIzpFwMTObOVWBm2IRXUQXGRz2tMebbLt8XlA2GhtCo59qPc=) 2026-02-04 00:21:13.434220 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCVBQv79dEU4QVdEsmXt34ArpaKo0EL5WifF2Ys+EgWdKygoHcBoPA6QE5lkpR5UA1a6Xepn2ZTQFYiquXJO9YJWU0/QE6ZTmLxA8QfuPY/Q9zg2gXN1WMOLe6Yt8jAPagfets7IE8IGuuW/Mxt65OakH6lHYdaH/tTP8iaHbZQit71fx1O3HVAIeQ5AU/FebNM6CWxbadC0DfyfkE4bivetWAmXme2qaPzN1VcMJ15EnvowmrTiPTiU48duN8gkd/HaMDoZjO1hHajNbnPMvV7xo2gwDZZVy59gvxaB69Lt7TmT1gsWOEWXLoGnTLVA7a+kifi+AjWrw4ZUwk6T162OH5EK2/vgAOLB+5YoRJL5iwuAeeS7gGZeO547n6L1irfsRbTaXVYup9POe1hlFGOSVus+Y8aGXQqW8oIoGyhZxNLHoT0wG9ohSHq7mbkdOWqsw9h/gl6i+ZMOfEphwCy+nInXOqD5+smfF0H6fzs1+wdp6GqjE77Qz+8VEwHCQM=) 2026-02-04 00:21:13.434261 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOzUFVQonPplQ3JbLl1MnspmWRCQGiVW5M/HfNfGiuAf) 2026-02-04 00:21:13.434275 | orchestrator | 2026-02-04 00:21:13.434286 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:21:13.434297 | orchestrator | Wednesday 04 February 2026 00:21:08 +0000 (0:00:01.030) 0:00:06.930 **** 2026-02-04 00:21:13.434308 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFqHKhiDVScLllH2oUXCtP2DP/eVyxcjuW9rq29lDuWn) 2026-02-04 00:21:13.434352 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDHAyjDeDFbGqjUi2lv0FSvjRLg9MQMmIjrEtZJrf9KPD8hyUHJtnieOBZa5IEswbjrAH6XL68l3p1VZUgOipQn82MzJc5ZXi9vMct3UQPenxiHOs+3hfj78IdB+3uuibvBGhV1TpuZYxgB/+WkLsKTv1AVwfNnPkyEVpppwU8xowt4YIkjn9NYDKVyWE04KxbHAu/OcjfvaYyfnL/dkMJVfKJhJnuArdhel84J3M2X/g0TL7JTXAbNmpT119anfegFGrZ9YHGyH37VtJ+VqtG3X4QvD9zKkxl+/mb3/oqPs3FifhUcY0tRgjvlhwgerH842CiiO15irPxBthTFaGhqYl3jn32GzxYmKg0BKkCo/pvRGdJ0AMf+0aZ+zRf6NSWxz6Ef0Kth1j6mtHr29ItOT76742QBPvucKR9HuMKS3rlxfKJWWpZSpm+DGpueNZlnzLGWed0vncZjIPD6Kt9CpEOnLfl7s68Qy8HwYpvVdDafcJT83x+5QXpyVmQ1wdc=) 2026-02-04 00:21:13.434365 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFXnxG32Qu3HZnN//fS8aXg/9o+rgkedtSkqERvTEhmxeeqyRhxlsiE1EeFcGIQTCU2yDOKvZ1Dtl8BK6GKq8eI=) 2026-02-04 00:21:13.434376 | orchestrator | 2026-02-04 00:21:13.434388 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:21:13.434399 | orchestrator | Wednesday 04 February 2026 00:21:09 +0000 (0:00:00.956) 0:00:07.886 **** 2026-02-04 00:21:13.434410 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZJ6wh5rnEQ/sciBni9WiI31e9sDnec+S4CXrM9VcCDmetojlQhLNgHAQ0aYm7/aZLCx0ApijpQMBo8DcQOA4TJy3Cy5A+j1JPS39D+3j5Z0iwkpbjZ0On1L/cW3IiCAohoir+IjPLGrw27S74dlbun6P8TtmTipUiiWo7nqRtn1nwB+KTpc4H4mhbY2MUOk5W2sEeBrv/ZfGJHe4tmM9e/GZak7Si87y6bystA3KgfM/e8o1Skc6Jaq8IKMVj80QnGylxTjovNORVa9ljK+SJ+Vj4BXAOGsSqnkjjmbh4jWjBiCRR0LJlgVTRILjtYTLWXCUcO3tfXnJFXXRWmwVtuPBCJdIjhV0FQH/oXyGA9kYV2OvZJbeX5Vm0PmK9sGv8B9GyaBqhUaLtgsKWwIMu+mxpxZNRp1Ddk1hBN8mnTdIXQHB/nsOubLQlKqWAwbsBoiEGi8zFEPy5CxnYgVFKRM7CrqLbuocsETHlGEq1llU71H0VwLkc3XyzPduakqM=) 2026-02-04 00:21:13.434422 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG9v2wjZc2FC3gg9WrXKf0ZtJPgJ6tLamPBinIxPpUDdnedFhz08pWCjK6oNNngPU+8/ZcHKkRn7e/qB4/cL390=) 
2026-02-04 00:21:13.434434 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHQxsxQHdViYBvgibe6Whjo2Ofyg+Sj3y+VGsYy/in8o) 2026-02-04 00:21:13.434445 | orchestrator | 2026-02-04 00:21:13.434456 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:21:13.434496 | orchestrator | Wednesday 04 February 2026 00:21:10 +0000 (0:00:00.948) 0:00:08.835 **** 2026-02-04 00:21:13.434507 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILsbCLhwFYAZvKvyuZXVhl0IeY2FzHZZkzAA8hRV/rR+) 2026-02-04 00:21:13.434519 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDC/p4lpWu7kBcCttQO/2Mi5iEGci0LjoCLAJZds8ljJ0GNW58v2Dm6qZJqUWUYv1tXJcOefqCc9panFLDRTZN5FOphXVVSJnxjeEw2uqKImBtWlyvdfz2cGYAPXYC3929VhEG0secsEvMvD69Nye7DPP4K6wTFYjasCvKOLCFKM20z7hDZ93yUa6PgvN5rStNBNlVYPfmGMKrspuCwtZymRKnHZj42XeApNXNy4wRMrsY9twdYF0L4MZcTaUgjYiTUiAgZRAm4k5b7JVRe8rHz2iw5UAaxbKBFyf8oiO5ooSAlMSGAtbHRBMu0VQsbgBN34+AR/kSvKJNsSCr6ytPR2ETDeM4cOdtwgBV0Qk4s5b5TecWCWtUa/WQ+MGcQIa2XNU8g6affXPGOwsmt/UWPgEv61v2jlxVEBc0HeFjBQvcYqxid23zAj5U0zhLsG52wN0qZLlCgXmwGp4yf2GDbyFaaC+MABXjL0dJ+JpLttWWDi1aUrct++IZ+IvLEOPc=) 2026-02-04 00:21:13.434539 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAzY23N5Oy8c//cqVee5phzfflZdbkd8+itI9Q8fyUF1qQ/8eI2JLN/fRIILTx2067+CScS9XL/kdGrt+klfFfE=) 2026-02-04 00:21:13.434572 | orchestrator | 2026-02-04 00:21:13.434584 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:21:13.434595 | orchestrator | Wednesday 04 February 2026 00:21:11 +0000 (0:00:00.927) 0:00:09.762 **** 2026-02-04 00:21:13.434683 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLUjTkjGDEkSR1j/ahs+Bh2kljLUQkvzqxpy6rlxrjFPlQTS84peJ98cDy7K71a2ij9h9ZxU9jpxicWHF90lUjY=) 2026-02-04 00:21:13.434695 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA8iFTpyXpXjvSXhqVphCV1167PsXMGCKahIQkytYppO) 2026-02-04 00:21:13.434707 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPiMthc21//QoVvfO22LJa6ja7jklwi4WCRxdU3DjLaF8jc4lAhks8niresvJ5k8dEF+zyJb6Cp00+mv+Y65YDcc4ViPycDy9VcW2MBpqu1UzNl+SSADMYmu6ocDPwfljXD5DtmT+R8H1CeEK2MsB89M8pOYxPy50+RkiRPe53mPd90ebXiG0o/dwTosrop7aDjIgimDAxsdGgB2w5iTxAmeW2+SC+BVt1f3a2BQld2Tp63ehzMmDCzIsw/M5KkBOKjnTv4FHyo94zGPk1AAghrxkuMQicETV1Di6Wp1lussOMUEh4ZRcF+LZfFJtVGJcU9gvT+zkS2W0+W/+gYZUxbdE3qfPTDt29C6naxw6Dv4ZZaeCmp12dGdGo+62P/DOKA36oe5oGX1GKktUwVl+sntFVOx889qiqhrQGZraGm3sntXjtWbLgcWj6czbL4+QR5wvHljahxG/wQbhkskPdMUzVAj4Snq7s1EGzfPIOxdwNKulcF84Ky3yc+FPtzYE=) 2026-02-04 00:21:13.434719 | orchestrator | 2026-02-04 00:21:13.434730 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:21:13.434741 | orchestrator | Wednesday 04 February 2026 00:21:12 +0000 (0:00:00.893) 0:00:10.656 **** 2026-02-04 00:21:13.434759 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOz8hqV8V6CjvYFGtG2UdLwpXGoIVfNkc8yawRmp2/Y7p8GOA9TLtwP/1ktyvykkbKAsFKVN6IZVnq7kg21ISnU=) 2026-02-04 00:21:23.221803 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC+Vl6ZAoMQv2jkz3/kgTbVCsTOJwrr6t0/LikVnUuviGzylmuoqx3Vi1zD/98nXSj4MXlazU0ESYuyjJQ1ULFTEQG8OjDh51Dmmn1MtexCTBKobHZ12+5B5GUfZzY11Oh8NM5VBWNR2whH1joNdoprfQMxL423BsLpa/5ZHDYavACTt6dkalf1xPjX6Xr2NYKk6X8MVOfpU2Pz/Q+fNq+k572e6tMxHVIpu82x7ey1OQwF2GvZHJ4mdFl9JDLfeT3esm/Ct21F2qTDt3QwNHokdsJ01Zi5lwOzSCWeOdgnsFpuqX56ZTBpCIcIZg3+4JLNpkox6ejMgt4ofZzi58UXhRRXy+oX3UXO4bnNH7Afjurd5B578jCQYvGTYh7n2cS+AqqUM1lJFo51a+xqc46QOwYTdaE2a86Q5lLQZdbObcvTYFSJydhnzKjuiqyvPULWR6UcS/mk27ZQ94oZo5X5XHGpvQ08MQMeBPsUMwiKm/9Tlnz6ROf/jmkiKGuyiFE=) 2026-02-04 00:21:23.221915 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA6hK2PBgp5qn/b5kYGimXeuJMkaeUmWPEru7DBPVeCu) 2026-02-04 00:21:23.221934 | orchestrator | 2026-02-04 00:21:23.221948 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:21:23.221960 | orchestrator | Wednesday 04 February 2026 00:21:13 +0000 (0:00:00.835) 0:00:11.492 **** 2026-02-04 00:21:23.221973 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRb/WJ09Nr8RJE7rBL8d6RJwSpQkZ6pWVzYvybZ85hz5vnfK53l5K8V9+2JPEmUc/OqOeGNLbCcW4B8dNVPP8t2+0YiZaEGi1cmTH3xMl+rG/XcM3EPGjPR2K15KqhD4GoN17m7XC+EgSXwVaRIeHLx4OxHspFU6Y7vb7YJtsv2MoDMGgGj/DuxE8H4eZtMOKvFDfJDH9oyJujlKaAjlZ7Fb6sPZGlpCHdvwJKus7s0bm355SdQtd+gT0sWmBvMVGIlYZvPQ4cSJs9Aj587mTtSobmoyRH+Tqoa9hSg/Q6BrReFLkWO5kpj/TPKnq177h059lHdouNQqjZveQOm763pcxzw4+xfxmMfvYeOpBXa0YUkCeXMh4HLnYo88U6ZzwIx3d58mSylJ6YXZJdYxvBKZVa5HEKV+E646muhEIjrbVhi2DjwE3LtYVkG/3cBCSb+DudcB7gWD5yHbYOf4Mp+mFAT7uB4oNFU1fq8KuSjVPC7U4E+azc9IcYT6397UU=) 2026-02-04 00:21:23.221986 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDIaOjZJzi1ZpOFo5xvT+jXGXORbxeSLsJDjmZygcG+R45Gl/KzGCEolIrAaaarOW4jI24VRwvrwOhHxHilQNAg=) 2026-02-04 00:21:23.222079 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIELOiVZiPxwOTQMK5RElWYCyBUvg7dnnoHddmRr3g7NV) 2026-02-04 00:21:23.222093 | orchestrator | 2026-02-04 00:21:23.222105 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-02-04 00:21:23.222117 | orchestrator | Wednesday 04 February 2026 00:21:14 +0000 (0:00:00.885) 0:00:12.377 **** 2026-02-04 00:21:23.222128 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-04 00:21:23.222140 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-04 00:21:23.222150 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-04 00:21:23.222161 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-04 00:21:23.222172 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-04 00:21:23.222182 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-04 00:21:23.222193 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-04 00:21:23.222204 | orchestrator | 2026-02-04 00:21:23.222215 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-02-04 00:21:23.222226 | orchestrator | Wednesday 04 February 2026 00:21:19 +0000 (0:00:04.748) 0:00:17.125 **** 2026-02-04 00:21:23.222238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-04 00:21:23.222250 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-04 00:21:23.222261 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries 
of testbed-node-4) 2026-02-04 00:21:23.222272 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-04 00:21:23.222283 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-04 00:21:23.222294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-04 00:21:23.222305 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-04 00:21:23.222315 | orchestrator | 2026-02-04 00:21:23.222343 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:21:23.222357 | orchestrator | Wednesday 04 February 2026 00:21:19 +0000 (0:00:00.183) 0:00:17.309 **** 2026-02-04 00:21:23.222370 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOzUFVQonPplQ3JbLl1MnspmWRCQGiVW5M/HfNfGiuAf) 2026-02-04 00:21:23.222404 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCVBQv79dEU4QVdEsmXt34ArpaKo0EL5WifF2Ys+EgWdKygoHcBoPA6QE5lkpR5UA1a6Xepn2ZTQFYiquXJO9YJWU0/QE6ZTmLxA8QfuPY/Q9zg2gXN1WMOLe6Yt8jAPagfets7IE8IGuuW/Mxt65OakH6lHYdaH/tTP8iaHbZQit71fx1O3HVAIeQ5AU/FebNM6CWxbadC0DfyfkE4bivetWAmXme2qaPzN1VcMJ15EnvowmrTiPTiU48duN8gkd/HaMDoZjO1hHajNbnPMvV7xo2gwDZZVy59gvxaB69Lt7TmT1gsWOEWXLoGnTLVA7a+kifi+AjWrw4ZUwk6T162OH5EK2/vgAOLB+5YoRJL5iwuAeeS7gGZeO547n6L1irfsRbTaXVYup9POe1hlFGOSVus+Y8aGXQqW8oIoGyhZxNLHoT0wG9ohSHq7mbkdOWqsw9h/gl6i+ZMOfEphwCy+nInXOqD5+smfF0H6fzs1+wdp6GqjE77Qz+8VEwHCQM=) 2026-02-04 00:21:23.222419 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJDaKYCkd2wla6n4JmeDDoIyB2nj4oxymlXnqcvIIzpFwMTObOVWBm2IRXUQXGRz2tMebbLt8XlA2GhtCo59qPc=) 2026-02-04 00:21:23.222440 | orchestrator | 2026-02-04 00:21:23.222453 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:21:23.222490 | orchestrator | Wednesday 04 February 2026 00:21:20 +0000 (0:00:00.963) 0:00:18.273 **** 2026-02-04 00:21:23.222508 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHAyjDeDFbGqjUi2lv0FSvjRLg9MQMmIjrEtZJrf9KPD8hyUHJtnieOBZa5IEswbjrAH6XL68l3p1VZUgOipQn82MzJc5ZXi9vMct3UQPenxiHOs+3hfj78IdB+3uuibvBGhV1TpuZYxgB/+WkLsKTv1AVwfNnPkyEVpppwU8xowt4YIkjn9NYDKVyWE04KxbHAu/OcjfvaYyfnL/dkMJVfKJhJnuArdhel84J3M2X/g0TL7JTXAbNmpT119anfegFGrZ9YHGyH37VtJ+VqtG3X4QvD9zKkxl+/mb3/oqPs3FifhUcY0tRgjvlhwgerH842CiiO15irPxBthTFaGhqYl3jn32GzxYmKg0BKkCo/pvRGdJ0AMf+0aZ+zRf6NSWxz6Ef0Kth1j6mtHr29ItOT76742QBPvucKR9HuMKS3rlxfKJWWpZSpm+DGpueNZlnzLGWed0vncZjIPD6Kt9CpEOnLfl7s68Qy8HwYpvVdDafcJT83x+5QXpyVmQ1wdc=) 2026-02-04 00:21:23.222522 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFXnxG32Qu3HZnN//fS8aXg/9o+rgkedtSkqERvTEhmxeeqyRhxlsiE1EeFcGIQTCU2yDOKvZ1Dtl8BK6GKq8eI=) 
2026-02-04 00:21:23.222535 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFqHKhiDVScLllH2oUXCtP2DP/eVyxcjuW9rq29lDuWn) 2026-02-04 00:21:23.222547 | orchestrator | 2026-02-04 00:21:23.222560 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:21:23.222573 | orchestrator | Wednesday 04 February 2026 00:21:21 +0000 (0:00:00.918) 0:00:19.191 **** 2026-02-04 00:21:23.222586 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHQxsxQHdViYBvgibe6Whjo2Ofyg+Sj3y+VGsYy/in8o) 2026-02-04 00:21:23.222599 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZJ6wh5rnEQ/sciBni9WiI31e9sDnec+S4CXrM9VcCDmetojlQhLNgHAQ0aYm7/aZLCx0ApijpQMBo8DcQOA4TJy3Cy5A+j1JPS39D+3j5Z0iwkpbjZ0On1L/cW3IiCAohoir+IjPLGrw27S74dlbun6P8TtmTipUiiWo7nqRtn1nwB+KTpc4H4mhbY2MUOk5W2sEeBrv/ZfGJHe4tmM9e/GZak7Si87y6bystA3KgfM/e8o1Skc6Jaq8IKMVj80QnGylxTjovNORVa9ljK+SJ+Vj4BXAOGsSqnkjjmbh4jWjBiCRR0LJlgVTRILjtYTLWXCUcO3tfXnJFXXRWmwVtuPBCJdIjhV0FQH/oXyGA9kYV2OvZJbeX5Vm0PmK9sGv8B9GyaBqhUaLtgsKWwIMu+mxpxZNRp1Ddk1hBN8mnTdIXQHB/nsOubLQlKqWAwbsBoiEGi8zFEPy5CxnYgVFKRM7CrqLbuocsETHlGEq1llU71H0VwLkc3XyzPduakqM=) 2026-02-04 00:21:23.222612 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG9v2wjZc2FC3gg9WrXKf0ZtJPgJ6tLamPBinIxPpUDdnedFhz08pWCjK6oNNngPU+8/ZcHKkRn7e/qB4/cL390=) 2026-02-04 00:21:23.222625 | orchestrator | 2026-02-04 00:21:23.222638 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:21:23.222650 | orchestrator | Wednesday 04 February 2026 00:21:22 +0000 (0:00:01.047) 0:00:20.238 **** 2026-02-04 00:21:23.222662 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAzY23N5Oy8c//cqVee5phzfflZdbkd8+itI9Q8fyUF1qQ/8eI2JLN/fRIILTx2067+CScS9XL/kdGrt+klfFfE=) 2026-02-04 00:21:23.222675 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILsbCLhwFYAZvKvyuZXVhl0IeY2FzHZZkzAA8hRV/rR+) 2026-02-04 00:21:23.222705 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDC/p4lpWu7kBcCttQO/2Mi5iEGci0LjoCLAJZds8ljJ0GNW58v2Dm6qZJqUWUYv1tXJcOefqCc9panFLDRTZN5FOphXVVSJnxjeEw2uqKImBtWlyvdfz2cGYAPXYC3929VhEG0secsEvMvD69Nye7DPP4K6wTFYjasCvKOLCFKM20z7hDZ93yUa6PgvN5rStNBNlVYPfmGMKrspuCwtZymRKnHZj42XeApNXNy4wRMrsY9twdYF0L4MZcTaUgjYiTUiAgZRAm4k5b7JVRe8rHz2iw5UAaxbKBFyf8oiO5ooSAlMSGAtbHRBMu0VQsbgBN34+AR/kSvKJNsSCr6ytPR2ETDeM4cOdtwgBV0Qk4s5b5TecWCWtUa/WQ+MGcQIa2XNU8g6affXPGOwsmt/UWPgEv61v2jlxVEBc0HeFjBQvcYqxid23zAj5U0zhLsG52wN0qZLlCgXmwGp4yf2GDbyFaaC+MABXjL0dJ+JpLttWWDi1aUrct++IZ+IvLEOPc=) 2026-02-04 00:21:27.303958 | orchestrator | 2026-02-04 00:21:27.304067 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:21:27.304092 | orchestrator | Wednesday 04 February 2026 00:21:23 +0000 (0:00:01.046) 0:00:21.285 **** 2026-02-04 00:21:27.304108 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPiMthc21//QoVvfO22LJa6ja7jklwi4WCRxdU3DjLaF8jc4lAhks8niresvJ5k8dEF+zyJb6Cp00+mv+Y65YDcc4ViPycDy9VcW2MBpqu1UzNl+SSADMYmu6ocDPwfljXD5DtmT+R8H1CeEK2MsB89M8pOYxPy50+RkiRPe53mPd90ebXiG0o/dwTosrop7aDjIgimDAxsdGgB2w5iTxAmeW2+SC+BVt1f3a2BQld2Tp63ehzMmDCzIsw/M5KkBOKjnTv4FHyo94zGPk1AAghrxkuMQicETV1Di6Wp1lussOMUEh4ZRcF+LZfFJtVGJcU9gvT+zkS2W0+W/+gYZUxbdE3qfPTDt29C6naxw6Dv4ZZaeCmp12dGdGo+62P/DOKA36oe5oGX1GKktUwVl+sntFVOx889qiqhrQGZraGm3sntXjtWbLgcWj6czbL4+QR5wvHljahxG/wQbhkskPdMUzVAj4Snq7s1EGzfPIOxdwNKulcF84Ky3yc+FPtzYE=) 2026-02-04 00:21:27.304134 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLUjTkjGDEkSR1j/ahs+Bh2kljLUQkvzqxpy6rlxrjFPlQTS84peJ98cDy7K71a2ij9h9ZxU9jpxicWHF90lUjY=) 2026-02-04 00:21:27.304147 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA8iFTpyXpXjvSXhqVphCV1167PsXMGCKahIQkytYppO) 2026-02-04 00:21:27.304166 | orchestrator | 2026-02-04 00:21:27.304175 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:21:27.304184 | orchestrator | Wednesday 04 February 2026 00:21:24 +0000 (0:00:00.976) 0:00:22.261 **** 2026-02-04 00:21:27.304193 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOz8hqV8V6CjvYFGtG2UdLwpXGoIVfNkc8yawRmp2/Y7p8GOA9TLtwP/1ktyvykkbKAsFKVN6IZVnq7kg21ISnU=) 2026-02-04 00:21:27.304203 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+Vl6ZAoMQv2jkz3/kgTbVCsTOJwrr6t0/LikVnUuviGzylmuoqx3Vi1zD/98nXSj4MXlazU0ESYuyjJQ1ULFTEQG8OjDh51Dmmn1MtexCTBKobHZ12+5B5GUfZzY11Oh8NM5VBWNR2whH1joNdoprfQMxL423BsLpa/5ZHDYavACTt6dkalf1xPjX6Xr2NYKk6X8MVOfpU2Pz/Q+fNq+k572e6tMxHVIpu82x7ey1OQwF2GvZHJ4mdFl9JDLfeT3esm/Ct21F2qTDt3QwNHokdsJ01Zi5lwOzSCWeOdgnsFpuqX56ZTBpCIcIZg3+4JLNpkox6ejMgt4ofZzi58UXhRRXy+oX3UXO4bnNH7Afjurd5B578jCQYvGTYh7n2cS+AqqUM1lJFo51a+xqc46QOwYTdaE2a86Q5lLQZdbObcvTYFSJydhnzKjuiqyvPULWR6UcS/mk27ZQ94oZo5X5XHGpvQ08MQMeBPsUMwiKm/9Tlnz6ROf/jmkiKGuyiFE=) 2026-02-04 00:21:27.304213 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA6hK2PBgp5qn/b5kYGimXeuJMkaeUmWPEru7DBPVeCu) 2026-02-04 00:21:27.304222 | orchestrator | 2026-02-04 00:21:27.304231 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 00:21:27.304240 | orchestrator | Wednesday 04 February 2026 00:21:25 +0000 (0:00:01.039) 0:00:23.301 **** 
2026-02-04 00:21:27.304249 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIELOiVZiPxwOTQMK5RElWYCyBUvg7dnnoHddmRr3g7NV) 2026-02-04 00:21:27.304276 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRb/WJ09Nr8RJE7rBL8d6RJwSpQkZ6pWVzYvybZ85hz5vnfK53l5K8V9+2JPEmUc/OqOeGNLbCcW4B8dNVPP8t2+0YiZaEGi1cmTH3xMl+rG/XcM3EPGjPR2K15KqhD4GoN17m7XC+EgSXwVaRIeHLx4OxHspFU6Y7vb7YJtsv2MoDMGgGj/DuxE8H4eZtMOKvFDfJDH9oyJujlKaAjlZ7Fb6sPZGlpCHdvwJKus7s0bm355SdQtd+gT0sWmBvMVGIlYZvPQ4cSJs9Aj587mTtSobmoyRH+Tqoa9hSg/Q6BrReFLkWO5kpj/TPKnq177h059lHdouNQqjZveQOm763pcxzw4+xfxmMfvYeOpBXa0YUkCeXMh4HLnYo88U6ZzwIx3d58mSylJ6YXZJdYxvBKZVa5HEKV+E646muhEIjrbVhi2DjwE3LtYVkG/3cBCSb+DudcB7gWD5yHbYOf4Mp+mFAT7uB4oNFU1fq8KuSjVPC7U4E+azc9IcYT6397UU=) 2026-02-04 00:21:27.304286 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDIaOjZJzi1ZpOFo5xvT+jXGXORbxeSLsJDjmZygcG+R45Gl/KzGCEolIrAaaarOW4jI24VRwvrwOhHxHilQNAg=) 2026-02-04 00:21:27.304295 | orchestrator | 2026-02-04 00:21:27.304304 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-02-04 00:21:27.304346 | orchestrator | Wednesday 04 February 2026 00:21:26 +0000 (0:00:00.982) 0:00:24.283 **** 2026-02-04 00:21:27.304356 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-04 00:21:27.304365 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-04 00:21:27.304374 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-04 00:21:27.304382 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-04 00:21:27.304391 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-04 00:21:27.304400 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-04 00:21:27.304408 | orchestrator | 
skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-04 00:21:27.304417 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:21:27.304426 | orchestrator |
2026-02-04 00:21:27.304451 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2026-02-04 00:21:27.304499 | orchestrator | Wednesday 04 February 2026 00:21:26 +0000 (0:00:00.153) 0:00:24.436 ****
2026-02-04 00:21:27.304511 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:21:27.304520 | orchestrator |
2026-02-04 00:21:27.304531 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2026-02-04 00:21:27.304541 | orchestrator | Wednesday 04 February 2026 00:21:26 +0000 (0:00:00.060) 0:00:24.487 ****
2026-02-04 00:21:27.304551 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:21:27.304561 | orchestrator |
2026-02-04 00:21:27.304572 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2026-02-04 00:21:27.304581 | orchestrator | Wednesday 04 February 2026 00:21:26 +0000 (0:00:00.060) 0:00:24.547 ****
2026-02-04 00:21:27.304592 | orchestrator | changed: [testbed-manager]
2026-02-04 00:21:27.304601 | orchestrator |
2026-02-04 00:21:27.304611 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:21:27.304622 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-04 00:21:27.304633 | orchestrator |
2026-02-04 00:21:27.304643 | orchestrator |
2026-02-04 00:21:27.304653 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:21:27.304663 | orchestrator | Wednesday 04 February 2026 00:21:27 +0000 (0:00:00.654) 0:00:25.202 ****
2026-02-04 00:21:27.304678 | orchestrator | ===============================================================================
2026-02-04 00:21:27.304688 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.63s
2026-02-04 00:21:27.304699 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 4.75s
2026-02-04 00:21:27.304709 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2026-02-04 00:21:27.304719 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2026-02-04 00:21:27.304729 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2026-02-04 00:21:27.304739 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2026-02-04 00:21:27.304749 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s
2026-02-04 00:21:27.304759 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s
2026-02-04 00:21:27.304769 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s
2026-02-04 00:21:27.304780 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s
2026-02-04 00:21:27.304790 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s
2026-02-04 00:21:27.304801 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s
2026-02-04 00:21:27.304810 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.92s
2026-02-04 00:21:27.304818 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.89s
2026-02-04 00:21:27.304827 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.89s
2026-02-04 00:21:27.304842 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.84s
2026-02-04 00:21:27.304851 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.65s
2026-02-04 00:21:27.304860 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s
2026-02-04 00:21:27.304869 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s
2026-02-04 00:21:27.304878 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.15s
2026-02-04 00:21:27.555774 | orchestrator | + osism apply squid
2026-02-04 00:21:39.585701 | orchestrator | 2026-02-04 00:21:39 | INFO  | Task 26101669-3acf-47bb-8e14-fefb1dadef5a (squid) was prepared for execution.
2026-02-04 00:21:39.585791 | orchestrator | 2026-02-04 00:21:39 | INFO  | It takes a moment until task 26101669-3acf-47bb-8e14-fefb1dadef5a (squid) has been started and output is visible here.
2026-02-04 00:23:37.441695 | orchestrator |
2026-02-04 00:23:37.441814 | orchestrator | PLAY [Apply role squid] ********************************************************
2026-02-04 00:23:37.441828 | orchestrator |
2026-02-04 00:23:37.441836 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2026-02-04 00:23:37.441845 | orchestrator | Wednesday 04 February 2026 00:21:43 +0000 (0:00:00.117) 0:00:00.117 ****
2026-02-04 00:23:37.441853 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2026-02-04 00:23:37.441862 | orchestrator |
2026-02-04 00:23:37.441870 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2026-02-04 00:23:37.441878 | orchestrator | Wednesday 04 February 2026 00:21:43 +0000 (0:00:00.087) 0:00:00.204 ****
2026-02-04 00:23:37.441886 | orchestrator | ok: [testbed-manager]
2026-02-04 00:23:37.441948 | orchestrator |
2026-02-04 00:23:37.441959 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2026-02-04 00:23:37.441967 | orchestrator | Wednesday 04 February 2026 00:21:44 +0000 (0:00:01.032) 0:00:01.236 ****
2026-02-04 00:23:37.441975 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2026-02-04 00:23:37.441983 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2026-02-04 00:23:37.441990 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2026-02-04 00:23:37.441998 | orchestrator |
2026-02-04 00:23:37.442006 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2026-02-04 00:23:37.442047 | orchestrator | Wednesday 04 February 2026 00:21:45 +0000 (0:00:01.002) 0:00:02.239 ****
2026-02-04 00:23:37.442057 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2026-02-04 00:23:37.442065 | orchestrator |
2026-02-04 00:23:37.442073 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2026-02-04 00:23:37.442080 | orchestrator | Wednesday 04 February 2026 00:21:46 +0000 (0:00:00.882) 0:00:03.122 ****
2026-02-04 00:23:37.442088 | orchestrator | ok: [testbed-manager]
2026-02-04 00:23:37.442095 | orchestrator |
2026-02-04 00:23:37.442103 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2026-02-04 00:23:37.442111 | orchestrator | Wednesday 04 February 2026 00:21:46 +0000 (0:00:00.316) 0:00:03.439 ****
2026-02-04 00:23:37.442118 | orchestrator | changed: [testbed-manager]
2026-02-04 00:23:37.442126 | orchestrator |
2026-02-04 00:23:37.442134 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2026-02-04 00:23:37.442142 | orchestrator | Wednesday 04 February 2026 00:21:47 +0000 (0:00:00.795) 0:00:04.234 ****
2026-02-04 00:23:37.442149 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2026-02-04 00:23:37.442157 | orchestrator | ok: [testbed-manager]
2026-02-04 00:23:37.442169 | orchestrator |
2026-02-04 00:23:37.442176 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-02-04 00:23:37.442184 | orchestrator | Wednesday 04 February 2026 00:22:17 +0000 (0:00:29.856) 0:00:34.091 ****
2026-02-04 00:23:37.442210 | orchestrator | changed: [testbed-manager]
2026-02-04 00:23:37.442218 | orchestrator |
2026-02-04 00:23:37.442225 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-02-04 00:23:37.442233 | orchestrator | Wednesday 04 February 2026 00:22:36 +0000 (0:00:19.478) 0:00:53.570 ****
2026-02-04 00:23:37.442241 | orchestrator | Pausing for 60 seconds
2026-02-04 00:23:37.442249 | orchestrator | changed: [testbed-manager]
2026-02-04 00:23:37.442257 | orchestrator |
2026-02-04 00:23:37.442265 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-02-04 00:23:37.442274 | orchestrator | Wednesday 04 February 2026 00:23:36 +0000 (0:01:00.072) 0:01:53.642 ****
2026-02-04 00:23:37.442282 | orchestrator | ok: [testbed-manager]
2026-02-04 00:23:37.442291 | orchestrator |
2026-02-04 00:23:37.442299 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-02-04 00:23:37.442308 | orchestrator | Wednesday 04 February 2026 00:23:36 +0000 (0:00:00.062) 0:01:53.705 ****
2026-02-04 00:23:37.442316 | orchestrator | changed: [testbed-manager]
2026-02-04 00:23:37.442325 | orchestrator |
2026-02-04 00:23:37.442333 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:23:37.442342 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:23:37.442350 | orchestrator |
2026-02-04 00:23:37.442359 | orchestrator |
2026-02-04 00:23:37.442409 | orchestrator |
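The "Wait for an healthy squid service" handler above polls until the container reports a healthy state. A minimal sketch of that retry pattern (the `wait_healthy` helper and its parameters are illustrative, not the role's actual implementation; in practice the check command would be something like `docker inspect --format '{{.State.Health.Status}}' squid`):

```shell
#!/usr/bin/env bash
# Sketch: poll a health-check command until it prints "healthy" or retries run out.
# check_cmd is any command whose stdout is the current health status.
wait_healthy() {
    local check_cmd=$1 retries=${2:-10} delay=${3:-5} attempt status
    for ((attempt = 1; attempt <= retries; attempt++)); do
        status=$($check_cmd 2>/dev/null)
        if [ "$status" = "healthy" ]; then
            echo "healthy after ${attempt} attempt(s)"
            return 0
        fi
        sleep "$delay"
    done
    echo "service did not become healthy after ${retries} attempts" >&2
    return 1
}

# Usage with a stub check that always reports healthy:
wait_healthy "echo healthy" 3 1
# -> healthy after 1 attempt(s)
```

The same shape covers the earlier "Manage squid service" task, which the log shows retrying ("10 retries left") before the service came up.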
TASKS RECAP ******************************************************************** 2026-02-04 00:23:37.442419 | orchestrator | Wednesday 04 February 2026 00:23:37 +0000 (0:00:00.552) 0:01:54.258 **** 2026-02-04 00:23:37.442428 | orchestrator | =============================================================================== 2026-02-04 00:23:37.442437 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2026-02-04 00:23:37.442445 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 29.86s 2026-02-04 00:23:37.442485 | orchestrator | osism.services.squid : Restart squid service --------------------------- 19.48s 2026-02-04 00:23:37.442493 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.03s 2026-02-04 00:23:37.442501 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.00s 2026-02-04 00:23:37.442508 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.88s 2026-02-04 00:23:37.442515 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.80s 2026-02-04 00:23:37.442523 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.55s 2026-02-04 00:23:37.442531 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.32s 2026-02-04 00:23:37.442538 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2026-02-04 00:23:37.442545 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2026-02-04 00:23:37.692178 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-04 00:23:37.693238 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-04 00:23:37.740882 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-04 00:23:37.740982 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release 2026-02-04 00:23:37.747414 | orchestrator | + set -e 2026-02-04 00:23:37.747476 | orchestrator | + NAMESPACE=kolla/release 2026-02-04 00:23:37.747491 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-04 00:23:37.752608 | orchestrator | ++ semver 9.5.0 9.0.0 2026-02-04 00:23:37.811945 | orchestrator | + [[ 1 -lt 0 ]] 2026-02-04 00:23:37.813176 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-02-04 00:23:49.750804 | orchestrator | 2026-02-04 00:23:49 | INFO  | Task 9e777863-25fe-440d-92d5-6610bc349a6f (operator) was prepared for execution. 2026-02-04 00:23:49.750913 | orchestrator | 2026-02-04 00:23:49 | INFO  | It takes a moment until task 9e777863-25fe-440d-92d5-6610bc349a6f (operator) has been started and output is visible here. 2026-02-04 00:24:05.052680 | orchestrator | 2026-02-04 00:24:05.052756 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-02-04 00:24:05.052767 | orchestrator | 2026-02-04 00:24:05.052775 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-04 00:24:05.052783 | orchestrator | Wednesday 04 February 2026 00:23:53 +0000 (0:00:00.127) 0:00:00.127 **** 2026-02-04 00:24:05.052790 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:24:05.052798 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:24:05.052805 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:24:05.052811 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:24:05.052818 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:24:05.052825 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:24:05.052832 | orchestrator | 2026-02-04 00:24:05.052844 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-02-04 00:24:05.052856 | orchestrator | Wednesday 04 February 2026 00:23:56 +0000 (0:00:03.256) 0:00:03.383 
**** 2026-02-04 00:24:05.052866 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:24:05.052878 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:24:05.052889 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:24:05.052901 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:24:05.052914 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:24:05.052925 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:24:05.052936 | orchestrator | 2026-02-04 00:24:05.052948 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-02-04 00:24:05.052960 | orchestrator | 2026-02-04 00:24:05.052972 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-04 00:24:05.052984 | orchestrator | Wednesday 04 February 2026 00:23:57 +0000 (0:00:00.719) 0:00:04.103 **** 2026-02-04 00:24:05.052995 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:24:05.053005 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:24:05.053012 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:24:05.053019 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:24:05.053026 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:24:05.053032 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:24:05.053040 | orchestrator | 2026-02-04 00:24:05.053047 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-04 00:24:05.053054 | orchestrator | Wednesday 04 February 2026 00:23:57 +0000 (0:00:00.126) 0:00:04.229 **** 2026-02-04 00:24:05.053074 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:24:05.053081 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:24:05.053087 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:24:05.053094 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:24:05.053101 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:24:05.053107 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:24:05.053114 | orchestrator | 2026-02-04 00:24:05.053121 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-04 00:24:05.053127 | orchestrator | Wednesday 04 February 2026 00:23:57 +0000 (0:00:00.128) 0:00:04.357 **** 2026-02-04 00:24:05.053134 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:24:05.053142 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:24:05.053149 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:24:05.053156 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:24:05.053162 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:24:05.053169 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:24:05.053176 | orchestrator | 2026-02-04 00:24:05.053183 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-04 00:24:05.053189 | orchestrator | Wednesday 04 February 2026 00:23:58 +0000 (0:00:00.634) 0:00:04.992 **** 2026-02-04 00:24:05.053196 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:24:05.053203 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:24:05.053209 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:24:05.053216 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:24:05.053222 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:24:05.053229 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:24:05.053236 | orchestrator | 2026-02-04 00:24:05.053244 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-04 00:24:05.053270 | orchestrator | Wednesday 04 February 2026 00:23:59 +0000 (0:00:00.778) 0:00:05.771 **** 2026-02-04 00:24:05.053278 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-02-04 00:24:05.053287 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-02-04 00:24:05.053295 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-02-04 00:24:05.053303 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-02-04 00:24:05.053311 | 
orchestrator | changed: [testbed-node-1] => (item=adm) 2026-02-04 00:24:05.053319 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-02-04 00:24:05.053327 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-02-04 00:24:05.053335 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-02-04 00:24:05.053343 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-02-04 00:24:05.053374 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-02-04 00:24:05.053383 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-02-04 00:24:05.053391 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-02-04 00:24:05.053399 | orchestrator | 2026-02-04 00:24:05.053407 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-04 00:24:05.053415 | orchestrator | Wednesday 04 February 2026 00:24:00 +0000 (0:00:01.136) 0:00:06.908 **** 2026-02-04 00:24:05.053424 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:24:05.053432 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:24:05.053440 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:24:05.053448 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:24:05.053456 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:24:05.053464 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:24:05.053472 | orchestrator | 2026-02-04 00:24:05.053481 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-04 00:24:05.053491 | orchestrator | Wednesday 04 February 2026 00:24:01 +0000 (0:00:01.128) 0:00:08.036 **** 2026-02-04 00:24:05.053499 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-02-04 00:24:05.053507 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-02-04 00:24:05.053515 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-02-04 00:24:05.053524 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-02-04 00:24:05.053545 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-02-04 00:24:05.053554 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-02-04 00:24:05.053562 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-02-04 00:24:05.053570 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-02-04 00:24:05.053580 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-02-04 00:24:05.053588 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-02-04 00:24:05.053597 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-02-04 00:24:05.053605 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-02-04 00:24:05.053611 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-02-04 00:24:05.053618 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-02-04 00:24:05.053625 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-02-04 00:24:05.053631 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-02-04 00:24:05.053638 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-02-04 00:24:05.053644 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-02-04 00:24:05.053651 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-02-04 00:24:05.053657 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-02-04 00:24:05.053671 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-02-04 00:24:05.053678 | 
orchestrator | 2026-02-04 00:24:05.053684 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-04 00:24:05.053692 | orchestrator | Wednesday 04 February 2026 00:24:02 +0000 (0:00:01.182) 0:00:09.219 **** 2026-02-04 00:24:05.053698 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:24:05.053705 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:24:05.053711 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:24:05.053718 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:24:05.053725 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:24:05.053732 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:24:05.053743 | orchestrator | 2026-02-04 00:24:05.053754 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-04 00:24:05.053766 | orchestrator | Wednesday 04 February 2026 00:24:02 +0000 (0:00:00.147) 0:00:09.366 **** 2026-02-04 00:24:05.053777 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:24:05.053787 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:24:05.053798 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:24:05.053808 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:24:05.053819 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:24:05.053831 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:24:05.053843 | orchestrator | 2026-02-04 00:24:05.053854 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-04 00:24:05.053865 | orchestrator | Wednesday 04 February 2026 00:24:03 +0000 (0:00:00.199) 0:00:09.566 **** 2026-02-04 00:24:05.053878 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:24:05.053890 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:24:05.053902 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:24:05.053913 | orchestrator | changed: [testbed-node-2] 2026-02-04 
00:24:05.053924 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:24:05.053936 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:24:05.053948 | orchestrator | 2026-02-04 00:24:05.053960 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-04 00:24:05.053971 | orchestrator | Wednesday 04 February 2026 00:24:03 +0000 (0:00:00.694) 0:00:10.261 **** 2026-02-04 00:24:05.053982 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:24:05.053993 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:24:05.054004 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:24:05.054064 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:24:05.054079 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:24:05.054092 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:24:05.054103 | orchestrator | 2026-02-04 00:24:05.054115 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-04 00:24:05.054144 | orchestrator | Wednesday 04 February 2026 00:24:03 +0000 (0:00:00.171) 0:00:10.432 **** 2026-02-04 00:24:05.054157 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-04 00:24:05.054169 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-04 00:24:05.054180 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:24:05.054191 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:24:05.054203 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-04 00:24:05.054214 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:24:05.054227 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-04 00:24:05.054238 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-04 00:24:05.054250 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:24:05.054261 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:24:05.054273 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-04 
00:24:05.054285 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:24:05.054296 | orchestrator | 2026-02-04 00:24:05.054307 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-04 00:24:05.054318 | orchestrator | Wednesday 04 February 2026 00:24:04 +0000 (0:00:00.693) 0:00:11.126 **** 2026-02-04 00:24:05.054338 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:24:05.054378 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:24:05.054390 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:24:05.054411 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:24:05.054422 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:24:05.054434 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:24:05.054445 | orchestrator | 2026-02-04 00:24:05.054457 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-04 00:24:05.054469 | orchestrator | Wednesday 04 February 2026 00:24:04 +0000 (0:00:00.172) 0:00:11.299 **** 2026-02-04 00:24:05.054480 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:24:05.054492 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:24:05.054503 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:24:05.054514 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:24:05.054534 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:24:06.349178 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:24:06.349314 | orchestrator | 2026-02-04 00:24:06.349343 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-04 00:24:06.349396 | orchestrator | Wednesday 04 February 2026 00:24:05 +0000 (0:00:00.177) 0:00:11.477 **** 2026-02-04 00:24:06.349416 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:24:06.349435 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:24:06.349454 | orchestrator | skipping: [testbed-node-2] 2026-02-04 
00:24:06.349472 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:24:06.349489 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:24:06.349509 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:24:06.349528 | orchestrator | 2026-02-04 00:24:06.349546 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-04 00:24:06.349564 | orchestrator | Wednesday 04 February 2026 00:24:05 +0000 (0:00:00.171) 0:00:11.649 **** 2026-02-04 00:24:06.349575 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:24:06.349586 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:24:06.349596 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:24:06.349607 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:24:06.349618 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:24:06.349629 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:24:06.349640 | orchestrator | 2026-02-04 00:24:06.349651 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-04 00:24:06.349662 | orchestrator | Wednesday 04 February 2026 00:24:05 +0000 (0:00:00.672) 0:00:12.321 **** 2026-02-04 00:24:06.349672 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:24:06.349683 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:24:06.349694 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:24:06.349705 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:24:06.349716 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:24:06.349727 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:24:06.349738 | orchestrator | 2026-02-04 00:24:06.349749 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:24:06.349780 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 00:24:06.349794 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 00:24:06.349804 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 00:24:06.349815 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 00:24:06.349826 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 00:24:06.349864 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 00:24:06.349875 | orchestrator | 2026-02-04 00:24:06.349886 | orchestrator | 2026-02-04 00:24:06.349897 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:24:06.349908 | orchestrator | Wednesday 04 February 2026 00:24:06 +0000 (0:00:00.226) 0:00:12.547 **** 2026-02-04 00:24:06.349919 | orchestrator | =============================================================================== 2026-02-04 00:24:06.349930 | orchestrator | Gathering Facts --------------------------------------------------------- 3.26s 2026-02-04 00:24:06.349940 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.18s 2026-02-04 00:24:06.349952 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.14s 2026-02-04 00:24:06.349963 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.13s 2026-02-04 00:24:06.349973 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.78s 2026-02-04 00:24:06.349984 | orchestrator | Do not require tty for all users ---------------------------------------- 0.72s 2026-02-04 00:24:06.349995 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.69s 2026-02-04 00:24:06.350005 | orchestrator | osism.commons.operator : Set ssh 
authorized keys ------------------------ 0.69s 2026-02-04 00:24:06.350070 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s 2026-02-04 00:24:06.350084 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.63s 2026-02-04 00:24:06.350094 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s 2026-02-04 00:24:06.350105 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.20s 2026-02-04 00:24:06.350116 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.18s 2026-02-04 00:24:06.350127 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s 2026-02-04 00:24:06.350138 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s 2026-02-04 00:24:06.350148 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s 2026-02-04 00:24:06.350159 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s 2026-02-04 00:24:06.350170 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.13s 2026-02-04 00:24:06.350181 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.13s 2026-02-04 00:24:06.612384 | orchestrator | + osism apply --environment custom facts 2026-02-04 00:24:08.497557 | orchestrator | 2026-02-04 00:24:08 | INFO  | Trying to run play facts in environment custom 2026-02-04 00:24:18.577077 | orchestrator | 2026-02-04 00:24:18 | INFO  | Task 68602f03-0c64-476b-92fb-84c3f4780429 (facts) was prepared for execution. 2026-02-04 00:24:18.577196 | orchestrator | 2026-02-04 00:24:18 | INFO  | It takes a moment until task 68602f03-0c64-476b-92fb-84c3f4780429 (facts) has been started and output is visible here. 
2026-02-04 00:25:01.833083 | orchestrator | 2026-02-04 00:25:01.833228 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-02-04 00:25:01.833245 | orchestrator | 2026-02-04 00:25:01.833256 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-04 00:25:01.833266 | orchestrator | Wednesday 04 February 2026 00:24:22 +0000 (0:00:00.080) 0:00:00.080 **** 2026-02-04 00:25:01.833277 | orchestrator | ok: [testbed-manager] 2026-02-04 00:25:01.833288 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:25:01.833299 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:25:01.833391 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:25:01.833402 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:25:01.833412 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:25:01.833422 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:25:01.833456 | orchestrator | 2026-02-04 00:25:01.833466 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-02-04 00:25:01.833477 | orchestrator | Wednesday 04 February 2026 00:24:23 +0000 (0:00:01.302) 0:00:01.383 **** 2026-02-04 00:25:01.833486 | orchestrator | ok: [testbed-manager] 2026-02-04 00:25:01.833496 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:25:01.833506 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:25:01.833516 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:25:01.833526 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:25:01.833536 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:25:01.833545 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:25:01.833555 | orchestrator | 2026-02-04 00:25:01.833565 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-02-04 00:25:01.833575 | orchestrator | 2026-02-04 00:25:01.833586 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-02-04 00:25:01.833597 | orchestrator | Wednesday 04 February 2026 00:24:24 +0000 (0:00:01.176) 0:00:02.559 **** 2026-02-04 00:25:01.833608 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:25:01.833619 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:25:01.833630 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:25:01.833640 | orchestrator | 2026-02-04 00:25:01.833652 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-04 00:25:01.833663 | orchestrator | Wednesday 04 February 2026 00:24:24 +0000 (0:00:00.090) 0:00:02.649 **** 2026-02-04 00:25:01.833674 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:25:01.833685 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:25:01.833696 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:25:01.833706 | orchestrator | 2026-02-04 00:25:01.833717 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-04 00:25:01.833728 | orchestrator | Wednesday 04 February 2026 00:24:25 +0000 (0:00:00.197) 0:00:02.846 **** 2026-02-04 00:25:01.833739 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:25:01.833750 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:25:01.833761 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:25:01.833770 | orchestrator | 2026-02-04 00:25:01.833779 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-04 00:25:01.833788 | orchestrator | Wednesday 04 February 2026 00:24:25 +0000 (0:00:00.199) 0:00:03.046 **** 2026-02-04 00:25:01.833799 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:25:01.833821 | orchestrator | 2026-02-04 00:25:01.833831 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-02-04 00:25:01.833842 | orchestrator | Wednesday 04 February 2026 00:24:25 +0000 (0:00:00.123) 0:00:03.169 **** 2026-02-04 00:25:01.833852 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:25:01.833861 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:25:01.833870 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:25:01.833880 | orchestrator | 2026-02-04 00:25:01.833914 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-04 00:25:01.833932 | orchestrator | Wednesday 04 February 2026 00:24:25 +0000 (0:00:00.433) 0:00:03.603 **** 2026-02-04 00:25:01.833959 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:25:01.833977 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:25:01.833985 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:25:01.833994 | orchestrator | 2026-02-04 00:25:01.834002 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-04 00:25:01.834010 | orchestrator | Wednesday 04 February 2026 00:24:25 +0000 (0:00:00.136) 0:00:03.739 **** 2026-02-04 00:25:01.834066 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:25:01.834074 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:25:01.834083 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:25:01.834091 | orchestrator | 2026-02-04 00:25:01.834100 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-04 00:25:01.834116 | orchestrator | Wednesday 04 February 2026 00:24:26 +0000 (0:00:01.065) 0:00:04.805 **** 2026-02-04 00:25:01.834125 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:25:01.834133 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:25:01.834148 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:25:01.834157 | orchestrator | 2026-02-04 00:25:01.834165 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-04 
00:25:01.834174 | orchestrator | Wednesday 04 February 2026 00:24:27 +0000 (0:00:00.458) 0:00:05.264 ****
2026-02-04 00:25:01.834182 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:25:01.834190 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:25:01.834199 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:25:01.834207 | orchestrator |
2026-02-04 00:25:01.834273 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-04 00:25:01.834282 | orchestrator | Wednesday 04 February 2026 00:24:28 +0000 (0:00:01.075) 0:00:06.339 ****
2026-02-04 00:25:01.834290 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:25:01.834299 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:25:01.834349 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:25:01.834358 | orchestrator |
2026-02-04 00:25:01.834366 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-02-04 00:25:01.834375 | orchestrator | Wednesday 04 February 2026 00:24:45 +0000 (0:00:16.609) 0:00:22.949 ****
2026-02-04 00:25:01.834384 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:25:01.834392 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:25:01.834401 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:25:01.834410 | orchestrator |
2026-02-04 00:25:01.834419 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-02-04 00:25:01.834454 | orchestrator | Wednesday 04 February 2026 00:24:45 +0000 (0:00:00.086) 0:00:23.036 ****
2026-02-04 00:25:01.834464 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:25:01.834473 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:25:01.834482 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:25:01.834491 | orchestrator |
2026-02-04 00:25:01.834500 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-04 00:25:01.834508 | orchestrator | Wednesday 04 February 2026 00:24:52 +0000 (0:00:07.747) 0:00:30.783 ****
2026-02-04 00:25:01.834517 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:25:01.834526 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:25:01.834535 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:25:01.834544 | orchestrator |
2026-02-04 00:25:01.834552 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-04 00:25:01.834561 | orchestrator | Wednesday 04 February 2026 00:24:53 +0000 (0:00:00.443) 0:00:31.227 ****
2026-02-04 00:25:01.834570 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-02-04 00:25:01.834580 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-02-04 00:25:01.834588 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-02-04 00:25:01.834597 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-02-04 00:25:01.834606 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-02-04 00:25:01.834619 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-02-04 00:25:01.834628 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-02-04 00:25:01.834636 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-02-04 00:25:01.834645 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-02-04 00:25:01.834654 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-02-04 00:25:01.834663 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-02-04 00:25:01.834671 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-02-04 00:25:01.834680 | orchestrator |
2026-02-04 00:25:01.834689 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-04 00:25:01.834705 | orchestrator | Wednesday 04 February 2026 00:24:56 +0000 (0:00:03.427) 0:00:34.655 ****
2026-02-04 00:25:01.834713 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:25:01.834722 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:25:01.834731 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:25:01.834739 | orchestrator |
2026-02-04 00:25:01.834748 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-04 00:25:01.834756 | orchestrator |
2026-02-04 00:25:01.834765 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-04 00:25:01.834774 | orchestrator | Wednesday 04 February 2026 00:24:58 +0000 (0:00:01.320) 0:00:35.975 ****
2026-02-04 00:25:01.834783 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:25:01.834791 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:25:01.834800 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:25:01.834808 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:25:01.834817 | orchestrator | ok: [testbed-manager]
2026-02-04 00:25:01.834826 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:25:01.834833 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:25:01.834841 | orchestrator |
2026-02-04 00:25:01.834850 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:25:01.834859 | orchestrator | testbed-manager : ok=3  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-02-04 00:25:01.834867 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-02-04 00:25:01.834876 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-02-04 00:25:01.834883 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-02-04 00:25:01.834891 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-02-04 00:25:01.834899 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-02-04 00:25:01.834908 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-02-04 00:25:01.834916 | orchestrator |
2026-02-04 00:25:01.834925 | orchestrator |
2026-02-04 00:25:01.834934 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:25:01.834943 | orchestrator | Wednesday 04 February 2026 00:25:01 +0000 (0:00:03.674) 0:00:39.650 ****
2026-02-04 00:25:01.834951 | orchestrator | ===============================================================================
2026-02-04 00:25:01.834960 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.61s
2026-02-04 00:25:01.834969 | orchestrator | Install required packages (Debian) -------------------------------------- 7.75s
2026-02-04 00:25:01.834978 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.68s
2026-02-04 00:25:01.834987 | orchestrator | Copy fact files --------------------------------------------------------- 3.43s
2026-02-04 00:25:01.834995 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.32s
2026-02-04 00:25:01.835003 | orchestrator | Create custom facts directory ------------------------------------------- 1.30s
2026-02-04 00:25:01.835018 | orchestrator | Copy fact file ---------------------------------------------------------- 1.18s
2026-02-04 00:25:02.056094 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.08s
2026-02-04 00:25:02.056218 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.07s
2026-02-04 00:25:02.056233 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2026-02-04 00:25:02.056245 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s
2026-02-04 00:25:02.056349 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2026-02-04 00:25:02.056365 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s
2026-02-04 00:25:02.056377 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s
2026-02-04 00:25:02.056388 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s
2026-02-04 00:25:02.056400 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.12s
2026-02-04 00:25:02.056413 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s
2026-02-04 00:25:02.056425 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2026-02-04 00:25:02.337851 | orchestrator | + osism apply bootstrap
2026-02-04 00:25:14.330128 | orchestrator | 2026-02-04 00:25:14 | INFO  | Task 594ffb64-0306-4e64-81c4-06f691449b46 (bootstrap) was prepared for execution.
2026-02-04 00:25:14.330356 | orchestrator | 2026-02-04 00:25:14 | INFO  | It takes a moment until task 594ffb64-0306-4e64-81c4-06f691449b46 (bootstrap) has been started and output is visible here.
2026-02-04 00:25:29.252031 | orchestrator |
2026-02-04 00:25:29.252155 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-02-04 00:25:29.252174 | orchestrator |
2026-02-04 00:25:29.252186 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-02-04 00:25:29.252197 | orchestrator | Wednesday 04 February 2026 00:25:18 +0000 (0:00:00.113) 0:00:00.113 ****
2026-02-04 00:25:29.252209 | orchestrator | ok: [testbed-manager]
2026-02-04 00:25:29.252221 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:25:29.252232 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:25:29.252243 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:25:29.252254 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:25:29.252265 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:25:29.252275 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:25:29.252350 | orchestrator |
2026-02-04 00:25:29.252366 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-04 00:25:29.252377 | orchestrator |
2026-02-04 00:25:29.252389 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-04 00:25:29.252400 | orchestrator | Wednesday 04 February 2026 00:25:18 +0000 (0:00:00.162) 0:00:00.276 ****
2026-02-04 00:25:29.252411 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:25:29.252424 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:25:29.252443 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:25:29.252461 | orchestrator | ok: [testbed-manager]
2026-02-04 00:25:29.252479 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:25:29.252497 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:25:29.252516 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:25:29.252534 | orchestrator |
2026-02-04 00:25:29.252553 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-02-04 00:25:29.252572 | orchestrator |
2026-02-04 00:25:29.252590 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-04 00:25:29.252609 | orchestrator | Wednesday 04 February 2026 00:25:21 +0000 (0:00:03.674) 0:00:03.950 ****
2026-02-04 00:25:29.252630 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-02-04 00:25:29.252651 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-02-04 00:25:29.252670 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-02-04 00:25:29.252689 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-02-04 00:25:29.252708 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 00:25:29.252728 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-02-04 00:25:29.252747 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 00:25:29.252767 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-02-04 00:25:29.252785 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-04 00:25:29.252839 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 00:25:29.252859 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-02-04 00:25:29.252878 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-04 00:25:29.252898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-04 00:25:29.252917 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-04 00:25:29.252938 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-04 00:25:29.252957 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-04 00:25:29.252977 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-04 00:25:29.252998 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:25:29.253017 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-04 00:25:29.253033 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-04 00:25:29.253044 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:25:29.253055 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-04 00:25:29.253066 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-02-04 00:25:29.253077 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-04 00:25:29.253088 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-04 00:25:29.253098 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-02-04 00:25:29.253109 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-04 00:25:29.253120 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-04 00:25:29.253131 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-02-04 00:25:29.253141 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-04 00:25:29.253154 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-04 00:25:29.253173 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-04 00:25:29.253189 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-04 00:25:29.253205 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-04 00:25:29.253223 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:25:29.253243 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-04 00:25:29.253260 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-04 00:25:29.253277 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-04 00:25:29.253318 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 00:25:29.253331 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-04 00:25:29.253342 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-04 00:25:29.253353 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-04 00:25:29.253363 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-04 00:25:29.253374 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-04 00:25:29.253385 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-04 00:25:29.253396 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:25:29.253435 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-04 00:25:29.253454 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-04 00:25:29.253472 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-04 00:25:29.253489 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:25:29.253506 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-04 00:25:29.253521 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-04 00:25:29.253539 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-04 00:25:29.253556 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:25:29.253598 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-04 00:25:29.253695 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:25:29.253710 | orchestrator |
2026-02-04 00:25:29.253722 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-02-04 00:25:29.253733 | orchestrator |
2026-02-04 00:25:29.253744 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-02-04 00:25:29.253756 | orchestrator | Wednesday 04 February 2026 00:25:22 +0000 (0:00:00.360) 0:00:04.310 ****
2026-02-04 00:25:29.253775 | orchestrator | ok: [testbed-manager]
2026-02-04 00:25:29.253794 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:25:29.253813 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:25:29.253830 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:25:29.253849 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:25:29.253865 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:25:29.253882 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:25:29.253898 | orchestrator |
2026-02-04 00:25:29.253917 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-02-04 00:25:29.253937 | orchestrator | Wednesday 04 February 2026 00:25:23 +0000 (0:00:01.165) 0:00:05.476 ****
2026-02-04 00:25:29.253957 | orchestrator | ok: [testbed-manager]
2026-02-04 00:25:29.253974 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:25:29.253993 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:25:29.254011 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:25:29.254117 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:25:29.254137 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:25:29.254154 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:25:29.254172 | orchestrator |
2026-02-04 00:25:29.254191 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-02-04 00:25:29.254211 | orchestrator | Wednesday 04 February 2026 00:25:24 +0000 (0:00:00.285) 0:00:06.671 ****
2026-02-04 00:25:29.254230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:25:29.254250 | orchestrator |
2026-02-04 00:25:29.254269 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-02-04 00:25:29.254315 | orchestrator | Wednesday 04 February 2026 00:25:24 +0000 (0:00:00.285) 0:00:06.956 ****
2026-02-04 00:25:29.254334 | orchestrator | changed: [testbed-manager]
2026-02-04 00:25:29.254353 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:25:29.254372 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:25:29.254391 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:25:29.254409 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:25:29.254429 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:25:29.254448 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:25:29.254466 | orchestrator |
2026-02-04 00:25:29.254485 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-02-04 00:25:29.254503 | orchestrator | Wednesday 04 February 2026 00:25:26 +0000 (0:00:01.938) 0:00:08.895 ****
2026-02-04 00:25:29.254523 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:25:29.254544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:25:29.254566 | orchestrator |
2026-02-04 00:25:29.254586 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-02-04 00:25:29.254606 | orchestrator | Wednesday 04 February 2026 00:25:27 +0000 (0:00:00.240) 0:00:09.135 ****
2026-02-04 00:25:29.254625 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:25:29.254644 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:25:29.254663 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:25:29.254683 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:25:29.254701 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:25:29.254720 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:25:29.254739 | orchestrator |
2026-02-04 00:25:29.254781 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-02-04 00:25:29.254800 | orchestrator | Wednesday 04 February 2026 00:25:28 +0000 (0:00:00.977) 0:00:10.112 ****
2026-02-04 00:25:29.254818 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:25:29.254835 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:25:29.254853 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:25:29.254871 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:25:29.254888 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:25:29.254905 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:25:29.254924 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:25:29.254942 | orchestrator |
2026-02-04 00:25:29.254960 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-02-04 00:25:29.254978 | orchestrator | Wednesday 04 February 2026 00:25:28 +0000 (0:00:00.578) 0:00:10.691 ****
2026-02-04 00:25:29.254996 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:25:29.255016 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:25:29.255035 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:25:29.255054 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:25:29.255078 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:25:29.255090 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:25:29.255101 | orchestrator | ok: [testbed-manager]
2026-02-04 00:25:29.255112 | orchestrator |
2026-02-04 00:25:29.255127 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-02-04 00:25:29.255146 | orchestrator | Wednesday 04 February 2026 00:25:29 +0000 (0:00:00.428) 0:00:11.119 ****
2026-02-04 00:25:29.255164 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:25:29.255184 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:25:29.255227 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:25:40.955713 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:25:40.955849 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:25:40.955868 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:25:40.955881 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:25:40.955893 | orchestrator |
2026-02-04 00:25:40.955906 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-02-04 00:25:40.955919 | orchestrator | Wednesday 04 February 2026 00:25:29 +0000 (0:00:00.242) 0:00:11.362 ****
2026-02-04 00:25:40.955932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:25:40.955961 | orchestrator |
2026-02-04 00:25:40.955973 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-02-04 00:25:40.955985 | orchestrator | Wednesday 04 February 2026 00:25:29 +0000 (0:00:00.275) 0:00:11.638 ****
2026-02-04 00:25:40.955996 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:25:40.956008 | orchestrator |
2026-02-04 00:25:40.956019 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-02-04 00:25:40.956030 | orchestrator | Wednesday 04 February 2026 00:25:29 +0000 (0:00:00.265) 0:00:11.904 ****
2026-02-04 00:25:40.956041 | orchestrator | ok: [testbed-manager]
2026-02-04 00:25:40.956053 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:25:40.956064 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:25:40.956075 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:25:40.956086 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:25:40.956097 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:25:40.956108 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:25:40.956119 | orchestrator |
2026-02-04 00:25:40.956130 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-02-04 00:25:40.956142 | orchestrator | Wednesday 04 February 2026 00:25:31 +0000 (0:00:01.316) 0:00:13.220 ****
2026-02-04 00:25:40.956175 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:25:40.956187 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:25:40.956200 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:25:40.956214 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:25:40.956226 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:25:40.956239 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:25:40.956253 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:25:40.956266 | orchestrator |
2026-02-04 00:25:40.956323 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-02-04 00:25:40.956345 | orchestrator | Wednesday 04 February 2026 00:25:31 +0000 (0:00:00.199) 0:00:13.420 ****
2026-02-04 00:25:40.956366 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:25:40.956385 | orchestrator | ok: [testbed-manager]
2026-02-04 00:25:40.956402 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:25:40.956415 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:25:40.956428 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:25:40.956440 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:25:40.956453 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:25:40.956466 | orchestrator |
2026-02-04 00:25:40.956479 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-02-04 00:25:40.956492 | orchestrator | Wednesday 04 February 2026 00:25:31 +0000 (0:00:00.537) 0:00:13.958 ****
2026-02-04 00:25:40.956504 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:25:40.956517 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:25:40.956530 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:25:40.956543 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:25:40.956556 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:25:40.956570 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:25:40.956582 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:25:40.956593 | orchestrator |
2026-02-04 00:25:40.956605 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-02-04 00:25:40.956617 | orchestrator | Wednesday 04 February 2026 00:25:32 +0000 (0:00:00.340) 0:00:14.298 ****
2026-02-04 00:25:40.956628 | orchestrator | ok: [testbed-manager]
2026-02-04 00:25:40.956638 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:25:40.956649 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:25:40.956660 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:25:40.956671 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:25:40.956682 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:25:40.956693 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:25:40.956703 | orchestrator |
2026-02-04 00:25:40.956714 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-02-04 00:25:40.956725 | orchestrator | Wednesday 04 February 2026 00:25:32 +0000 (0:00:00.541) 0:00:14.840 ****
2026-02-04 00:25:40.956736 | orchestrator | ok: [testbed-manager]
2026-02-04 00:25:40.956747 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:25:40.956758 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:25:40.956769 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:25:40.956779 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:25:40.956790 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:25:40.956801 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:25:40.956811 | orchestrator |
2026-02-04 00:25:40.956822 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-02-04 00:25:40.956833 | orchestrator | Wednesday 04 February 2026 00:25:33 +0000 (0:00:01.042) 0:00:15.883 ****
2026-02-04 00:25:40.956844 | orchestrator | ok: [testbed-manager]
2026-02-04 00:25:40.956855 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:25:40.956876 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:25:40.956888 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:25:40.956898 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:25:40.956909 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:25:40.956920 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:25:40.956931 | orchestrator |
2026-02-04 00:25:40.956942 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-02-04 00:25:40.957013 | orchestrator | Wednesday 04 February 2026 00:25:34 +0000 (0:00:01.042) 0:00:16.925 ****
2026-02-04 00:25:40.957047 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:25:40.957059 | orchestrator |
2026-02-04 00:25:40.957071 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-02-04 00:25:40.957082 | orchestrator | Wednesday 04 February 2026 00:25:35 +0000 (0:00:00.270) 0:00:17.196 ****
2026-02-04 00:25:40.957093 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:25:40.957104 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:25:40.957115 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:25:40.957126 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:25:40.957137 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:25:40.957147 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:25:40.957158 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:25:40.957169 | orchestrator |
2026-02-04 00:25:40.957187 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-04 00:25:40.957205 | orchestrator | Wednesday 04 February 2026 00:25:36 +0000 (0:00:01.370) 0:00:18.566 ****
2026-02-04 00:25:40.957223 | orchestrator | ok: [testbed-manager]
2026-02-04 00:25:40.957240 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:25:40.957258 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:25:40.957270 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:25:40.957308 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:25:40.957320 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:25:40.957331 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:25:40.957342 | orchestrator |
2026-02-04 00:25:40.957353 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-04 00:25:40.957364 | orchestrator | Wednesday 04 February 2026 00:25:36 +0000 (0:00:00.206) 0:00:18.773 ****
2026-02-04 00:25:40.957375 | orchestrator | ok: [testbed-manager]
2026-02-04 00:25:40.957385 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:25:40.957396 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:25:40.957407 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:25:40.957417 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:25:40.957428 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:25:40.957439 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:25:40.957449 | orchestrator |
2026-02-04 00:25:40.957460 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-04 00:25:40.957471 | orchestrator | Wednesday 04 February 2026 00:25:36 +0000 (0:00:00.206) 0:00:18.980 ****
2026-02-04 00:25:40.957482 | orchestrator | ok: [testbed-manager]
2026-02-04 00:25:40.957493 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:25:40.957503 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:25:40.957514 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:25:40.957525 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:25:40.957535 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:25:40.957546 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:25:40.957557 | orchestrator |
2026-02-04 00:25:40.957568 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-04 00:25:40.957579 | orchestrator | Wednesday 04 February 2026 00:25:37 +0000 (0:00:00.191) 0:00:19.172 ****
2026-02-04 00:25:40.957591 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:25:40.957604 | orchestrator |
2026-02-04 00:25:40.957615 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-04 00:25:40.957626 | orchestrator | Wednesday 04 February 2026 00:25:37 +0000 (0:00:00.262) 0:00:19.435 ****
2026-02-04 00:25:40.957637 | orchestrator | ok: [testbed-manager]
2026-02-04 00:25:40.957647 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:25:40.957668 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:25:40.957679 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:25:40.957690 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:25:40.957701 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:25:40.957711 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:25:40.957722 | orchestrator |
2026-02-04 00:25:40.957733 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-04 00:25:40.957744 | orchestrator | Wednesday 04 February 2026 00:25:37 +0000 (0:00:00.524) 0:00:19.959 ****
2026-02-04 00:25:40.957755 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:25:40.957766 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:25:40.957776 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:25:40.957787 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:25:40.957798 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:25:40.957809 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:25:40.957819 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:25:40.957830 | orchestrator |
2026-02-04 00:25:40.957841 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-04 00:25:40.957852 | orchestrator | Wednesday 04 February 2026 00:25:38 +0000 (0:00:00.236) 0:00:20.196 ****
2026-02-04 00:25:40.957863 | orchestrator | ok: [testbed-manager]
2026-02-04 00:25:40.957873 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:25:40.957884 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:25:40.957895 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:25:40.957906 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:25:40.957917 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:25:40.957927 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:25:40.957938 | orchestrator |
2026-02-04 00:25:40.957949 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-04 00:25:40.957960 | orchestrator | Wednesday 04 February 2026 00:25:39 +0000 (0:00:01.089) 0:00:21.285 ****
2026-02-04 00:25:40.957971 | orchestrator | ok: [testbed-manager]
2026-02-04 00:25:40.957981 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:25:40.957992 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:25:40.958003 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:25:40.958077 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:25:40.958092 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:25:40.958103 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:25:40.958114 | orchestrator |
2026-02-04 00:25:40.958125 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-04 00:25:40.958136 | orchestrator | Wednesday 04 February 2026 00:25:39 +0000 (0:00:00.539) 0:00:21.825 ****
2026-02-04 00:25:40.958148 | orchestrator | ok: [testbed-manager]
2026-02-04 00:25:40.958167 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:25:40.958179 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:25:40.958190 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:25:40.958212 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:26:21.209425 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:26:21.209535 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:26:21.209550 | orchestrator |
2026-02-04 00:26:21.209563 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-04 00:26:21.209577 | orchestrator | Wednesday 04 February 2026 00:25:40 +0000 (0:00:01.155) 0:00:22.980 ****
2026-02-04 00:26:21.209588 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:26:21.209600 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:26:21.209612 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:26:21.209623 | orchestrator | changed: [testbed-manager]
2026-02-04 00:26:21.209634 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:26:21.209646 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:26:21.209657 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:26:21.209668 | orchestrator |
2026-02-04 00:26:21.209679 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-02-04 00:26:21.209690 | orchestrator | Wednesday 04 February 2026 00:25:56 +0000 (0:00:15.882) 0:00:38.863 ****
2026-02-04 00:26:21.209701 | orchestrator | ok: [testbed-manager]
2026-02-04 00:26:21.209738 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:26:21.209767 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:26:21.209779 | orchestrator
| ok: [testbed-node-5] 2026-02-04 00:26:21.209800 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:26:21.209811 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:26:21.209822 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:26:21.209833 | orchestrator | 2026-02-04 00:26:21.209844 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-02-04 00:26:21.209855 | orchestrator | Wednesday 04 February 2026 00:25:57 +0000 (0:00:00.212) 0:00:39.075 **** 2026-02-04 00:26:21.209866 | orchestrator | ok: [testbed-manager] 2026-02-04 00:26:21.209877 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:26:21.209888 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:26:21.209899 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:26:21.209909 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:26:21.209920 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:26:21.209931 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:26:21.209942 | orchestrator | 2026-02-04 00:26:21.209953 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-02-04 00:26:21.209964 | orchestrator | Wednesday 04 February 2026 00:25:57 +0000 (0:00:00.226) 0:00:39.301 **** 2026-02-04 00:26:21.209975 | orchestrator | ok: [testbed-manager] 2026-02-04 00:26:21.209985 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:26:21.209996 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:26:21.210007 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:26:21.210064 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:26:21.210078 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:26:21.210089 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:26:21.210101 | orchestrator | 2026-02-04 00:26:21.210112 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-02-04 00:26:21.210123 | orchestrator | Wednesday 04 February 2026 00:25:57 +0000 (0:00:00.213) 0:00:39.515 **** 2026-02-04 
00:26:21.210137 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:26:21.210151 | orchestrator | 2026-02-04 00:26:21.210162 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-02-04 00:26:21.210173 | orchestrator | Wednesday 04 February 2026 00:25:57 +0000 (0:00:00.293) 0:00:39.809 **** 2026-02-04 00:26:21.210184 | orchestrator | ok: [testbed-manager] 2026-02-04 00:26:21.210195 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:26:21.210206 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:26:21.210216 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:26:21.210227 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:26:21.210238 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:26:21.210249 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:26:21.210288 | orchestrator | 2026-02-04 00:26:21.210308 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-02-04 00:26:21.210327 | orchestrator | Wednesday 04 February 2026 00:25:59 +0000 (0:00:01.728) 0:00:41.537 **** 2026-02-04 00:26:21.210345 | orchestrator | changed: [testbed-manager] 2026-02-04 00:26:21.210365 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:26:21.210377 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:26:21.210388 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:26:21.210399 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:26:21.210409 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:26:21.210420 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:26:21.210431 | orchestrator | 2026-02-04 00:26:21.210442 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-02-04 00:26:21.210453 | 
orchestrator | Wednesday 04 February 2026 00:26:00 +0000 (0:00:01.056) 0:00:42.593 **** 2026-02-04 00:26:21.210464 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:26:21.210475 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:26:21.210485 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:26:21.210496 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:26:21.210516 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:26:21.210527 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:26:21.210538 | orchestrator | ok: [testbed-manager] 2026-02-04 00:26:21.210549 | orchestrator | 2026-02-04 00:26:21.210560 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-02-04 00:26:21.210571 | orchestrator | Wednesday 04 February 2026 00:26:02 +0000 (0:00:01.717) 0:00:44.311 **** 2026-02-04 00:26:21.210583 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:26:21.210595 | orchestrator | 2026-02-04 00:26:21.210620 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-02-04 00:26:21.210632 | orchestrator | Wednesday 04 February 2026 00:26:02 +0000 (0:00:00.283) 0:00:44.594 **** 2026-02-04 00:26:21.210643 | orchestrator | changed: [testbed-manager] 2026-02-04 00:26:21.210654 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:26:21.210665 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:26:21.210676 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:26:21.210686 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:26:21.210697 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:26:21.210708 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:26:21.210719 | orchestrator | 2026-02-04 00:26:21.210747 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-02-04 00:26:21.210758 | orchestrator | Wednesday 04 February 2026 00:26:03 +0000 (0:00:01.014) 0:00:45.608 **** 2026-02-04 00:26:21.210769 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:26:21.210781 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:26:21.210791 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:26:21.210802 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:26:21.210813 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:26:21.210824 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:26:21.210835 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:26:21.210846 | orchestrator | 2026-02-04 00:26:21.210857 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-02-04 00:26:21.210868 | orchestrator | Wednesday 04 February 2026 00:26:03 +0000 (0:00:00.237) 0:00:45.845 **** 2026-02-04 00:26:21.210879 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:26:21.210890 | orchestrator | 2026-02-04 00:26:21.210901 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-02-04 00:26:21.210912 | orchestrator | Wednesday 04 February 2026 00:26:04 +0000 (0:00:00.282) 0:00:46.128 **** 2026-02-04 00:26:21.210923 | orchestrator | ok: [testbed-manager] 2026-02-04 00:26:21.210934 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:26:21.210945 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:26:21.210955 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:26:21.210966 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:26:21.210977 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:26:21.210988 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:26:21.210999 | 
orchestrator | 2026-02-04 00:26:21.211009 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-02-04 00:26:21.211020 | orchestrator | Wednesday 04 February 2026 00:26:05 +0000 (0:00:01.573) 0:00:47.701 **** 2026-02-04 00:26:21.211031 | orchestrator | changed: [testbed-manager] 2026-02-04 00:26:21.211042 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:26:21.211053 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:26:21.211064 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:26:21.211075 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:26:21.211086 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:26:21.211096 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:26:21.211107 | orchestrator | 2026-02-04 00:26:21.211125 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-02-04 00:26:21.211136 | orchestrator | Wednesday 04 February 2026 00:26:06 +0000 (0:00:01.106) 0:00:48.808 **** 2026-02-04 00:26:21.211147 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:26:21.211158 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:26:21.211169 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:26:21.211180 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:26:21.211191 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:26:21.211201 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:26:21.211213 | orchestrator | changed: [testbed-manager] 2026-02-04 00:26:21.211223 | orchestrator | 2026-02-04 00:26:21.211234 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-02-04 00:26:21.211245 | orchestrator | Wednesday 04 February 2026 00:26:18 +0000 (0:00:11.367) 0:01:00.175 **** 2026-02-04 00:26:21.211280 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:26:21.211291 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:26:21.211302 | orchestrator | ok: 
[testbed-manager] 2026-02-04 00:26:21.211313 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:26:21.211324 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:26:21.211335 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:26:21.211345 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:26:21.211356 | orchestrator | 2026-02-04 00:26:21.211367 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-02-04 00:26:21.211378 | orchestrator | Wednesday 04 February 2026 00:26:19 +0000 (0:00:01.514) 0:01:01.690 **** 2026-02-04 00:26:21.211389 | orchestrator | ok: [testbed-manager] 2026-02-04 00:26:21.211399 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:26:21.211410 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:26:21.211421 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:26:21.211432 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:26:21.211442 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:26:21.211453 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:26:21.211464 | orchestrator | 2026-02-04 00:26:21.211475 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-02-04 00:26:21.211485 | orchestrator | Wednesday 04 February 2026 00:26:20 +0000 (0:00:00.891) 0:01:02.581 **** 2026-02-04 00:26:21.211496 | orchestrator | ok: [testbed-manager] 2026-02-04 00:26:21.211507 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:26:21.211518 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:26:21.211528 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:26:21.211539 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:26:21.211550 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:26:21.211560 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:26:21.211571 | orchestrator | 2026-02-04 00:26:21.211582 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-02-04 00:26:21.211593 | orchestrator | 
Wednesday 04 February 2026 00:26:20 +0000 (0:00:00.199) 0:01:02.781 **** 2026-02-04 00:26:21.211604 | orchestrator | ok: [testbed-manager] 2026-02-04 00:26:21.211615 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:26:21.211625 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:26:21.211636 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:26:21.211647 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:26:21.211658 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:26:21.211668 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:26:21.211679 | orchestrator | 2026-02-04 00:26:21.211695 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-02-04 00:26:21.211707 | orchestrator | Wednesday 04 February 2026 00:26:20 +0000 (0:00:00.197) 0:01:02.979 **** 2026-02-04 00:26:21.211718 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:26:21.211730 | orchestrator | 2026-02-04 00:26:21.211747 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-02-04 00:28:40.455022 | orchestrator | Wednesday 04 February 2026 00:26:21 +0000 (0:00:00.255) 0:01:03.234 **** 2026-02-04 00:28:40.455138 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:40.455157 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:40.455169 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:40.455180 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:40.455192 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:28:40.455203 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:28:40.455214 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:28:40.455226 | orchestrator | 2026-02-04 00:28:40.455238 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-02-04 00:28:40.455250 | orchestrator | Wednesday 04 February 2026 00:26:22 +0000 (0:00:01.659) 0:01:04.894 **** 2026-02-04 00:28:40.455261 | orchestrator | changed: [testbed-manager] 2026-02-04 00:28:40.455273 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:28:40.455284 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:28:40.455295 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:28:40.455306 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:28:40.455317 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:28:40.455328 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:28:40.455339 | orchestrator | 2026-02-04 00:28:40.455350 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-02-04 00:28:40.455362 | orchestrator | Wednesday 04 February 2026 00:26:23 +0000 (0:00:00.566) 0:01:05.460 **** 2026-02-04 00:28:40.455425 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:40.455437 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:40.455448 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:40.455459 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:40.455470 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:28:40.455481 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:28:40.455492 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:28:40.455503 | orchestrator | 2026-02-04 00:28:40.455514 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-02-04 00:28:40.455526 | orchestrator | Wednesday 04 February 2026 00:26:23 +0000 (0:00:00.221) 0:01:05.682 **** 2026-02-04 00:28:40.455538 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:40.455551 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:40.455563 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:40.455575 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:40.455587 | orchestrator | ok: [testbed-node-2] 
2026-02-04 00:28:40.455599 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:28:40.455612 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:28:40.455624 | orchestrator | 2026-02-04 00:28:40.455641 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-02-04 00:28:40.455661 | orchestrator | Wednesday 04 February 2026 00:26:24 +0000 (0:00:01.204) 0:01:06.887 **** 2026-02-04 00:28:40.455682 | orchestrator | changed: [testbed-manager] 2026-02-04 00:28:40.455703 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:28:40.455725 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:28:40.455745 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:28:40.455761 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:28:40.455774 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:28:40.455787 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:28:40.455800 | orchestrator | 2026-02-04 00:28:40.455813 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-02-04 00:28:40.455830 | orchestrator | Wednesday 04 February 2026 00:26:26 +0000 (0:00:01.741) 0:01:08.629 **** 2026-02-04 00:28:40.455843 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:40.455855 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:40.455873 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:28:40.455893 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:40.455913 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:40.455930 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:28:40.455950 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:28:40.455968 | orchestrator | 2026-02-04 00:28:40.455986 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-02-04 00:28:40.456022 | orchestrator | Wednesday 04 February 2026 00:26:29 +0000 (0:00:02.508) 0:01:11.137 **** 2026-02-04 00:28:40.456034 | orchestrator | ok: 
[testbed-manager] 2026-02-04 00:28:40.456045 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:28:40.456055 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:40.456066 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:40.456077 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:40.456087 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:28:40.456098 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:28:40.456109 | orchestrator | 2026-02-04 00:28:40.456120 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-02-04 00:28:40.456131 | orchestrator | Wednesday 04 February 2026 00:27:10 +0000 (0:00:41.133) 0:01:52.270 **** 2026-02-04 00:28:40.456141 | orchestrator | changed: [testbed-manager] 2026-02-04 00:28:40.456152 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:28:40.456164 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:28:40.456183 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:28:40.456202 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:28:40.456221 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:28:40.456240 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:28:40.456251 | orchestrator | 2026-02-04 00:28:40.456262 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-02-04 00:28:40.456273 | orchestrator | Wednesday 04 February 2026 00:28:25 +0000 (0:01:15.727) 0:03:07.997 **** 2026-02-04 00:28:40.456284 | orchestrator | ok: [testbed-manager] 2026-02-04 00:28:40.456295 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:40.456306 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:40.456317 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:40.456328 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:28:40.456338 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:28:40.456349 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:28:40.456360 | orchestrator | 2026-02-04 00:28:40.456398 | 
orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-02-04 00:28:40.456416 | orchestrator | Wednesday 04 February 2026 00:28:27 +0000 (0:00:01.719) 0:03:09.717 **** 2026-02-04 00:28:40.456427 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:28:40.456438 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:28:40.456449 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:28:40.456460 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:28:40.456470 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:28:40.456481 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:28:40.456492 | orchestrator | changed: [testbed-manager] 2026-02-04 00:28:40.456503 | orchestrator | 2026-02-04 00:28:40.456514 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-02-04 00:28:40.456525 | orchestrator | Wednesday 04 February 2026 00:28:38 +0000 (0:00:10.621) 0:03:20.339 **** 2026-02-04 00:28:40.456574 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-02-04 00:28:40.456610 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 
'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-02-04 00:28:40.456627 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-02-04 00:28:40.456699 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-02-04 00:28:40.456712 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-02-04 00:28:40.456723 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-02-04 00:28:40.456734 | orchestrator | 2026-02-04 00:28:40.456746 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-02-04 00:28:40.456757 | orchestrator | Wednesday 04 February 2026 00:28:38 +0000 (0:00:00.402) 0:03:20.742 **** 2026-02-04 00:28:40.456768 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 
262144})  2026-02-04 00:28:40.456779 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:28:40.456790 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-04 00:28:40.456800 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:28:40.456812 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-04 00:28:40.456822 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:28:40.456833 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-04 00:28:40.456844 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:28:40.456855 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-04 00:28:40.456866 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-04 00:28:40.456877 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-04 00:28:40.456888 | orchestrator | 2026-02-04 00:28:40.456899 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-02-04 00:28:40.456910 | orchestrator | Wednesday 04 February 2026 00:28:40 +0000 (0:00:01.666) 0:03:22.408 **** 2026-02-04 00:28:40.456921 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-04 00:28:40.456938 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-04 00:28:40.456950 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-04 00:28:40.456961 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-04 00:28:40.457001 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 
'value': 16777216})  2026-02-04 00:28:40.457021 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-04 00:28:47.283359 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-04 00:28:47.283526 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-04 00:28:47.283544 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-04 00:28:47.283577 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-04 00:28:47.283589 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-04 00:28:47.283601 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-02-04 00:28:47.283612 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-04 00:28:47.283623 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-04 00:28:47.283633 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-02-04 00:28:47.283644 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-04 00:28:47.283655 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-04 00:28:47.283666 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-04 00:28:47.283677 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-04 00:28:47.283688 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 
 2026-02-04 00:28:47.283699 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-02-04 00:28:47.283710 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:28:47.283723 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-04 00:28:47.283734 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-04 00:28:47.283745 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-04 00:28:47.283756 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-02-04 00:28:47.283766 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:28:47.283778 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-04 00:28:47.283789 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-04 00:28:47.283799 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-04 00:28:47.283810 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-04 00:28:47.283821 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-04 00:28:47.283831 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-02-04 00:28:47.283842 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-04 00:28:47.283853 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-04 00:28:47.283864 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 
16777216})
2026-02-04 00:28:47.283875 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-04 00:28:47.283886 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-04 00:28:47.283898 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-04 00:28:47.283911 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-04 00:28:47.283925 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-04 00:28:47.283937 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-04 00:28:47.283958 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:28:47.283972 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:28:47.283985 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-04 00:28:47.284014 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-04 00:28:47.284028 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-04 00:28:47.284041 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-04 00:28:47.284053 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-04 00:28:47.284085 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-04 00:28:47.284097 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-04 00:28:47.284108 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-04 00:28:47.284119 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-04 00:28:47.284130 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-04 00:28:47.284141 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-04 00:28:47.284152 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-04 00:28:47.284162 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-04 00:28:47.284173 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-04 00:28:47.284184 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-04 00:28:47.284195 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-04 00:28:47.284206 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-04 00:28:47.284216 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-04 00:28:47.284227 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-04 00:28:47.284238 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-04 00:28:47.284249 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-04 00:28:47.284260 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-04 00:28:47.284271 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-04 00:28:47.284282 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-04 00:28:47.284293 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-04 00:28:47.284303 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-04 00:28:47.284314 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-04 00:28:47.284325 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-04 00:28:47.284336 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-04 00:28:47.284347 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-04 00:28:47.284359 | orchestrator |
2026-02-04 00:28:47.284370 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-02-04 00:28:47.284428 | orchestrator | Wednesday 04 February 2026 00:28:45 +0000 (0:00:04.743) 0:03:27.152 ****
2026-02-04 00:28:47.284441 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-04 00:28:47.284452 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-04 00:28:47.284463 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-04 00:28:47.284474 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-04 00:28:47.284485 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-04 00:28:47.284495 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-04 00:28:47.284506 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-04 00:28:47.284517 | orchestrator |
2026-02-04 00:28:47.284528 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-02-04 00:28:47.284539 | orchestrator | Wednesday 04 February 2026 00:28:46 +0000 (0:00:01.547) 0:03:28.699 ****
2026-02-04 00:28:47.284549 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:28:47.284560 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:28:47.284571 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:28:47.284588 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:28:47.284599 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:28:47.284610 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:28:47.284621 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:28:47.284632 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:28:47.284643 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:28:47.284654 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:28:47.284672 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:29:03.036406 | orchestrator |
2026-02-04 00:29:03.036566 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-02-04 00:29:03.036582 | orchestrator | Wednesday 04 February 2026 00:28:47 +0000 (0:00:00.603) 0:03:29.303 ****
2026-02-04 00:29:03.036592 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:29:03.036603 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:29:03.036614 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:29:03.036625 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:29:03.036634 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:29:03.036644 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:29:03.036654 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:29:03.036664 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:29:03.036674 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:29:03.036684 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:29:03.036693 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 00:29:03.036703 | orchestrator |
2026-02-04 00:29:03.036713 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-02-04 00:29:03.036746 | orchestrator | Wednesday 04 February 2026 00:28:47 +0000 (0:00:00.610) 0:03:29.913 ****
2026-02-04 00:29:03.036756 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-04 00:29:03.036766 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:29:03.036776 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-04 00:29:03.036786 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-04 00:29:03.036795 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:29:03.036805 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:29:03.036815 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-04 00:29:03.036824 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:29:03.036834 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-04 00:29:03.036844 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-04 00:29:03.036853 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-04 00:29:03.036863 | orchestrator |
2026-02-04 00:29:03.036873 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-02-04 00:29:03.036883 | orchestrator | Wednesday 04 February 2026 00:28:49 +0000 (0:00:01.654) 0:03:31.568 ****
2026-02-04 00:29:03.036910 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:29:03.036930 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:29:03.036942 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:29:03.036954 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:29:03.036966 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:29:03.036977 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:29:03.036988 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:29:03.037000 | orchestrator |
2026-02-04 00:29:03.037012 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-02-04 00:29:03.037024 | orchestrator | Wednesday 04 February 2026 00:28:49 +0000 (0:00:00.311) 0:03:31.880 ****
2026-02-04 00:29:03.037036 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:29:03.037048 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:29:03.037060 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:29:03.037071 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:29:03.037082 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:29:03.037094 | orchestrator | ok: [testbed-manager]
2026-02-04 00:29:03.037105 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:29:03.037117 | orchestrator |
2026-02-04 00:29:03.037129 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-02-04 00:29:03.037140 | orchestrator | Wednesday 04 February 2026 00:28:55 +0000 (0:00:05.485) 0:03:37.365 ****
2026-02-04 00:29:03.037152 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-02-04 00:29:03.037201 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-02-04 00:29:03.037214 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:29:03.037226 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:29:03.037238 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-02-04 00:29:03.037250 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-02-04 00:29:03.037262 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:29:03.037273 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-02-04 00:29:03.037285 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:29:03.037316 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-02-04 00:29:03.037334 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:29:03.037354 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:29:03.037378 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-02-04 00:29:03.037394 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:29:03.037446 | orchestrator |
2026-02-04 00:29:03.037465 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-02-04 00:29:03.037494 | orchestrator | Wednesday 04 February 2026 00:28:55 +0000 (0:00:00.268) 0:03:37.634 ****
2026-02-04 00:29:03.037510 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-02-04 00:29:03.037525 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-02-04 00:29:03.037540 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-02-04 00:29:03.037580 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-02-04 00:29:03.037598 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-02-04 00:29:03.037614 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-02-04 00:29:03.037630 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-02-04 00:29:03.037645 | orchestrator |
2026-02-04 00:29:03.037660 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-02-04 00:29:03.037676 | orchestrator | Wednesday 04 February 2026 00:28:56 +0000 (0:00:01.018) 0:03:38.653 ****
2026-02-04 00:29:03.037694 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:29:03.037712 | orchestrator |
2026-02-04 00:29:03.037726 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-02-04 00:29:03.037742 | orchestrator | Wednesday 04 February 2026 00:28:57 +0000 (0:00:00.505) 0:03:39.158 ****
2026-02-04 00:29:03.037757 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:29:03.037772 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:29:03.037788 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:29:03.037804 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:29:03.037820 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:29:03.037836 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:29:03.037852 | orchestrator | ok: [testbed-manager]
2026-02-04 00:29:03.037868 | orchestrator |
2026-02-04 00:29:03.037883 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-02-04 00:29:03.037898 | orchestrator | Wednesday 04 February 2026 00:28:59 +0000 (0:00:01.981) 0:03:41.139 ****
2026-02-04 00:29:03.037912 | orchestrator | ok: [testbed-manager]
2026-02-04 00:29:03.037926 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:29:03.037942 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:29:03.037956 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:29:03.037972 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:29:03.037987 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:29:03.038002 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:29:03.038101 | orchestrator |
2026-02-04 00:29:03.038126 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-02-04 00:29:03.038144 | orchestrator | Wednesday 04 February 2026 00:29:00 +0000 (0:00:01.407) 0:03:42.547 ****
2026-02-04 00:29:03.038161 | orchestrator | changed: [testbed-manager]
2026-02-04 00:29:03.038179 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:29:03.038196 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:29:03.038253 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:29:03.038272 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:29:03.038288 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:29:03.038304 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:29:03.038320 | orchestrator |
2026-02-04 00:29:03.038336 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-02-04 00:29:03.038351 | orchestrator | Wednesday 04 February 2026 00:29:01 +0000 (0:00:00.674) 0:03:43.221 ****
2026-02-04 00:29:03.038367 | orchestrator | ok: [testbed-manager]
2026-02-04 00:29:03.038384 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:29:03.038401 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:29:03.038447 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:29:03.038463 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:29:03.038479 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:29:03.038495 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:29:03.038512 | orchestrator |
2026-02-04 00:29:03.038528 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-02-04 00:29:03.038563 | orchestrator | Wednesday 04 February 2026 00:29:01 +0000 (0:00:00.780) 0:03:44.001 ****
2026-02-04 00:29:03.038586 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770163512.480779, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 00:29:03.038610 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770163531.5151384, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 00:29:03.038639 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770163526.7809813, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 00:29:03.038681 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770163528.852581, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 00:29:07.889539 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770163507.0641875, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 00:29:07.889654 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770163536.9298248, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 00:29:07.889670 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770163525.2381217, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 00:29:07.889704 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 00:29:07.889714 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 00:29:07.889737 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 00:29:07.889747 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 00:29:07.889783 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 00:29:07.889793 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 00:29:07.889803 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 00:29:07.889819 | orchestrator |
2026-02-04 00:29:07.889831 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-02-04 00:29:07.889841 | orchestrator | Wednesday 04 February 2026 00:29:03 +0000 (0:00:01.058) 0:03:45.060 ****
2026-02-04 00:29:07.889851 | orchestrator | changed: [testbed-manager]
2026-02-04 00:29:07.889861 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:29:07.889870 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:29:07.889878 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:29:07.889887 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:29:07.889896 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:29:07.889920 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:29:07.889938 | orchestrator |
2026-02-04 00:29:07.889948 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-02-04 00:29:07.889957 | orchestrator | Wednesday 04 February 2026 00:29:04 +0000 (0:00:01.059) 0:03:46.119 ****
2026-02-04 00:29:07.889966 | orchestrator | changed: [testbed-manager]
2026-02-04 00:29:07.889975 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:29:07.889983 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:29:07.889992 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:29:07.890001 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:29:07.890010 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:29:07.890076 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:29:07.890087 | orchestrator |
2026-02-04 00:29:07.890097 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-02-04 00:29:07.890106 | orchestrator | Wednesday 04 February 2026 00:29:05 +0000 (0:00:01.163) 0:03:47.283 ****
2026-02-04 00:29:07.890115 | orchestrator | changed: [testbed-manager]
2026-02-04 00:29:07.890124 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:29:07.890133 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:29:07.890142 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:29:07.890150 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:29:07.890198 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:29:07.890215 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:29:07.890224 | orchestrator |
2026-02-04 00:29:07.890233 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-02-04 00:29:07.890242 | orchestrator | Wednesday 04 February 2026 00:29:06 +0000 (0:00:01.148) 0:03:48.431 ****
2026-02-04 00:29:07.890256 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:29:07.890271 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:29:07.890286 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:29:07.890301 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:29:07.890309 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:29:07.890318 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:29:07.890327 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:29:07.890335 | orchestrator |
2026-02-04 00:29:07.890344 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-02-04 00:29:07.890353 | orchestrator | Wednesday 04 February 2026 00:29:06 +0000 (0:00:00.300) 0:03:48.731 ****
2026-02-04 00:29:07.890362 | orchestrator | ok: [testbed-manager]
2026-02-04 00:29:07.890371 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:29:07.890379 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:29:07.890388 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:29:07.890397 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:29:07.890405 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:29:07.890414 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:29:07.890446 | orchestrator |
2026-02-04 00:29:07.890456 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-02-04 00:29:07.890480 | orchestrator | Wednesday 04 February 2026 00:29:07 +0000 (0:00:00.729) 0:03:49.461 ****
2026-02-04 00:29:07.890491 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:29:07.890511 | orchestrator |
2026-02-04 00:29:07.890520 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-02-04 00:29:07.890538 | orchestrator | Wednesday 04 February 2026 00:29:07 +0000 (0:00:00.454) 0:03:49.916 ****
2026-02-04 00:30:23.944643 | orchestrator | ok: [testbed-manager]
2026-02-04 00:30:23.944764 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:30:23.944781 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:30:23.944793 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:30:23.944805 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:30:23.944816 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:30:23.944827 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:30:23.944838 | orchestrator |
2026-02-04 00:30:23.944850 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-02-04 00:30:23.944862 | orchestrator | Wednesday 04 February 2026 00:29:16 +0000 (0:00:08.748) 0:03:58.664 ****
2026-02-04 00:30:23.944873 | orchestrator | ok: [testbed-manager]
2026-02-04 00:30:23.944884 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:30:23.944895 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:30:23.944906 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:30:23.944917 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:30:23.944928 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:30:23.944955 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:30:23.944966 | orchestrator |
2026-02-04 00:30:23.944989 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-02-04 00:30:23.945000 | orchestrator | Wednesday 04 February 2026 00:29:17 +0000 (0:00:01.323) 0:03:59.988 ****
2026-02-04 00:30:23.945011 | orchestrator | ok: [testbed-manager]
2026-02-04 00:30:23.945022 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:30:23.945033 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:30:23.945044 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:30:23.945055 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:30:23.945065 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:30:23.945076 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:30:23.945087 | orchestrator |
2026-02-04 00:30:23.945098 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-02-04 00:30:23.945110 | orchestrator | Wednesday 04 February 2026 00:29:19 +0000 (0:00:01.155) 0:04:01.144 ****
2026-02-04 00:30:23.945121 | orchestrator | ok: [testbed-manager]
2026-02-04 00:30:23.945135 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:30:23.945148 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:30:23.945161 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:30:23.945174 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:30:23.945187 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:30:23.945200 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:30:23.945213 | orchestrator |
2026-02-04 00:30:23.945226 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-02-04 00:30:23.945241 | orchestrator | Wednesday 04 February 2026 00:29:19 +0000 (0:00:00.288) 0:04:01.432 ****
2026-02-04 00:30:23.945254 | orchestrator | ok: [testbed-manager]
2026-02-04 00:30:23.945266 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:30:23.945277 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:30:23.945288 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:30:23.945299 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:30:23.945310 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:30:23.945320 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:30:23.945331 | orchestrator |
2026-02-04 00:30:23.945343 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-02-04 00:30:23.945354 | orchestrator | Wednesday 04 February 2026 00:29:19 +0000 (0:00:00.317) 0:04:01.750 ****
2026-02-04 00:30:23.945365 | orchestrator | ok: [testbed-manager]
2026-02-04 00:30:23.945376 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:30:23.945387 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:30:23.945398 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:30:23.945434 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:30:23.945445 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:30:23.945456 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:30:23.945467 | orchestrator |
2026-02-04 00:30:23.945478 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-02-04 00:30:23.945489 | orchestrator | Wednesday 04 February 2026 00:29:20 +0000 (0:00:00.289) 0:04:02.039 ****
2026-02-04 00:30:23.945500 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:30:23.945510 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:30:23.945521 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:30:23.945532 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:30:23.945604 | orchestrator | ok: [testbed-manager]
2026-02-04 00:30:23.945618 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:30:23.945629 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:30:23.945640 | orchestrator |
2026-02-04 00:30:23.945651 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-02-04 00:30:23.945662 | orchestrator | Wednesday 04 February 2026 00:29:24 +0000 (0:00:04.916) 0:04:06.956 ****
2026-02-04 00:30:23.945675 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:30:23.945689 | orchestrator |
2026-02-04 00:30:23.945700 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-02-04 00:30:23.945711 | orchestrator | Wednesday 04 February 2026 00:29:25 +0000 (0:00:00.384) 0:04:07.340 ****
2026-02-04 00:30:23.945723 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-02-04 00:30:23.945733 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-02-04 00:30:23.945745 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-02-04 00:30:23.945755 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-02-04 00:30:23.945766 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:30:23.945796 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-02-04 00:30:23.945807 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:30:23.945818 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-02-04 00:30:23.945829 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-02-04 00:30:23.945840 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-02-04 00:30:23.945851 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:30:23.945862 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-02-04 00:30:23.945872 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:30:23.945883 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-02-04 00:30:23.945894 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-02-04 00:30:23.945905 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-02-04 00:30:23.945934 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:30:23.945946 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:30:23.945957 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-02-04 00:30:23.945968 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-02-04 00:30:23.945979 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:30:23.945990 | orchestrator | 2026-02-04 00:30:23.946001 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-02-04 00:30:23.946012 | orchestrator | Wednesday 04 February 2026 00:29:25 +0000 (0:00:00.350) 0:04:07.691 **** 2026-02-04 00:30:23.946090 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:30:23.946103 | orchestrator | 2026-02-04 00:30:23.946114 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-02-04 00:30:23.946125 | orchestrator | Wednesday 04 February 2026 00:29:26 +0000 (0:00:00.372) 0:04:08.063 **** 2026-02-04 00:30:23.946146 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-02-04 
00:30:23.946157 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-02-04 00:30:23.946174 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:30:23.946193 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:30:23.946212 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-02-04 00:30:23.946230 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-02-04 00:30:23.946248 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:30:23.946266 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-02-04 00:30:23.946284 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:30:23.946302 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-02-04 00:30:23.946321 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:30:23.946339 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:30:23.946359 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-02-04 00:30:23.946377 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:30:23.946396 | orchestrator | 2026-02-04 00:30:23.946408 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-02-04 00:30:23.946418 | orchestrator | Wednesday 04 February 2026 00:29:26 +0000 (0:00:00.282) 0:04:08.346 **** 2026-02-04 00:30:23.946430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:30:23.946441 | orchestrator | 2026-02-04 00:30:23.946452 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-02-04 00:30:23.946463 | orchestrator | Wednesday 04 February 2026 00:29:26 +0000 (0:00:00.421) 0:04:08.767 **** 2026-02-04 00:30:23.946474 | 
orchestrator | changed: [testbed-node-1] 2026-02-04 00:30:23.946484 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:30:23.946495 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:30:23.946506 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:30:23.946517 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:30:23.946527 | orchestrator | changed: [testbed-manager] 2026-02-04 00:30:23.946538 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:30:23.946575 | orchestrator | 2026-02-04 00:30:23.946587 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-02-04 00:30:23.946598 | orchestrator | Wednesday 04 February 2026 00:29:59 +0000 (0:00:33.033) 0:04:41.801 **** 2026-02-04 00:30:23.946609 | orchestrator | changed: [testbed-manager] 2026-02-04 00:30:23.946620 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:30:23.946631 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:30:23.946642 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:30:23.946652 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:30:23.946663 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:30:23.946674 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:30:23.946685 | orchestrator | 2026-02-04 00:30:23.946696 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-02-04 00:30:23.946707 | orchestrator | Wednesday 04 February 2026 00:30:08 +0000 (0:00:08.674) 0:04:50.476 **** 2026-02-04 00:30:23.946725 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:30:23.946736 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:30:23.946747 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:30:23.946758 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:30:23.946769 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:30:23.946780 | orchestrator | changed: [testbed-manager] 2026-02-04 00:30:23.946790 | orchestrator | changed: 
[testbed-node-0] 2026-02-04 00:30:23.946801 | orchestrator | 2026-02-04 00:30:23.946812 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-02-04 00:30:23.946823 | orchestrator | Wednesday 04 February 2026 00:30:16 +0000 (0:00:07.625) 0:04:58.102 **** 2026-02-04 00:30:23.946843 | orchestrator | ok: [testbed-manager] 2026-02-04 00:30:23.946854 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:30:23.946865 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:30:23.946876 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:30:23.946887 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:30:23.946898 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:30:23.946908 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:30:23.946919 | orchestrator | 2026-02-04 00:30:23.946930 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-02-04 00:30:23.946941 | orchestrator | Wednesday 04 February 2026 00:30:17 +0000 (0:00:01.748) 0:04:59.850 **** 2026-02-04 00:30:23.946952 | orchestrator | changed: [testbed-manager] 2026-02-04 00:30:23.946963 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:30:23.946974 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:30:23.946985 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:30:23.946996 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:30:23.947007 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:30:23.947017 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:30:23.947028 | orchestrator | 2026-02-04 00:30:23.947050 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-02-04 00:30:35.289011 | orchestrator | Wednesday 04 February 2026 00:30:23 +0000 (0:00:06.116) 0:05:05.966 **** 2026-02-04 00:30:35.289107 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, 
testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:30:35.289118 | orchestrator | 2026-02-04 00:30:35.289123 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-02-04 00:30:35.289128 | orchestrator | Wednesday 04 February 2026 00:30:24 +0000 (0:00:00.539) 0:05:06.505 **** 2026-02-04 00:30:35.289132 | orchestrator | changed: [testbed-manager] 2026-02-04 00:30:35.289137 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:30:35.289141 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:30:35.289145 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:30:35.289150 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:30:35.289154 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:30:35.289158 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:30:35.289162 | orchestrator | 2026-02-04 00:30:35.289166 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-02-04 00:30:35.289170 | orchestrator | Wednesday 04 February 2026 00:30:25 +0000 (0:00:00.753) 0:05:07.259 **** 2026-02-04 00:30:35.289173 | orchestrator | ok: [testbed-manager] 2026-02-04 00:30:35.289178 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:30:35.289182 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:30:35.289186 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:30:35.289190 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:30:35.289194 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:30:35.289198 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:30:35.289201 | orchestrator | 2026-02-04 00:30:35.289205 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-02-04 00:30:35.289209 | orchestrator | Wednesday 04 February 2026 00:30:27 +0000 (0:00:01.887) 0:05:09.146 **** 2026-02-04 00:30:35.289213 | orchestrator | changed: [testbed-manager] 2026-02-04 00:30:35.289217 | 
orchestrator | changed: [testbed-node-3] 2026-02-04 00:30:35.289221 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:30:35.289225 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:30:35.289228 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:30:35.289232 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:30:35.289236 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:30:35.289240 | orchestrator | 2026-02-04 00:30:35.289244 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-02-04 00:30:35.289248 | orchestrator | Wednesday 04 February 2026 00:30:27 +0000 (0:00:00.852) 0:05:09.998 **** 2026-02-04 00:30:35.289252 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:30:35.289271 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:30:35.289275 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:30:35.289279 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:30:35.289283 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:30:35.289287 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:30:35.289290 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:30:35.289294 | orchestrator | 2026-02-04 00:30:35.289298 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-02-04 00:30:35.289302 | orchestrator | Wednesday 04 February 2026 00:30:28 +0000 (0:00:00.254) 0:05:10.252 **** 2026-02-04 00:30:35.289306 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:30:35.289310 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:30:35.289313 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:30:35.289317 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:30:35.289321 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:30:35.289325 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:30:35.289329 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:30:35.289332 | 
orchestrator | 2026-02-04 00:30:35.289336 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-02-04 00:30:35.289340 | orchestrator | Wednesday 04 February 2026 00:30:28 +0000 (0:00:00.382) 0:05:10.634 **** 2026-02-04 00:30:35.289344 | orchestrator | ok: [testbed-manager] 2026-02-04 00:30:35.289348 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:30:35.289352 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:30:35.289356 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:30:35.289359 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:30:35.289363 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:30:35.289367 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:30:35.289371 | orchestrator | 2026-02-04 00:30:35.289375 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-02-04 00:30:35.289388 | orchestrator | Wednesday 04 February 2026 00:30:28 +0000 (0:00:00.259) 0:05:10.894 **** 2026-02-04 00:30:35.289392 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:30:35.289396 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:30:35.289400 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:30:35.289404 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:30:35.289407 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:30:35.289411 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:30:35.289415 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:30:35.289419 | orchestrator | 2026-02-04 00:30:35.289423 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-02-04 00:30:35.289428 | orchestrator | Wednesday 04 February 2026 00:30:29 +0000 (0:00:00.283) 0:05:11.177 **** 2026-02-04 00:30:35.289431 | orchestrator | ok: [testbed-manager] 2026-02-04 00:30:35.289435 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:30:35.289439 | orchestrator | ok: [testbed-node-4] 2026-02-04 
00:30:35.289443 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:30:35.289447 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:30:35.289450 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:30:35.289454 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:30:35.289458 | orchestrator | 2026-02-04 00:30:35.289462 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-02-04 00:30:35.289466 | orchestrator | Wednesday 04 February 2026 00:30:29 +0000 (0:00:00.284) 0:05:11.461 **** 2026-02-04 00:30:35.289470 | orchestrator | ok: [testbed-manager] =>  2026-02-04 00:30:35.289474 | orchestrator |  docker_version: 5:27.5.1 2026-02-04 00:30:35.289477 | orchestrator | ok: [testbed-node-3] =>  2026-02-04 00:30:35.289481 | orchestrator |  docker_version: 5:27.5.1 2026-02-04 00:30:35.289485 | orchestrator | ok: [testbed-node-4] =>  2026-02-04 00:30:35.289489 | orchestrator |  docker_version: 5:27.5.1 2026-02-04 00:30:35.289493 | orchestrator | ok: [testbed-node-5] =>  2026-02-04 00:30:35.289496 | orchestrator |  docker_version: 5:27.5.1 2026-02-04 00:30:35.289511 | orchestrator | ok: [testbed-node-0] =>  2026-02-04 00:30:35.289518 | orchestrator |  docker_version: 5:27.5.1 2026-02-04 00:30:35.289522 | orchestrator | ok: [testbed-node-1] =>  2026-02-04 00:30:35.289526 | orchestrator |  docker_version: 5:27.5.1 2026-02-04 00:30:35.289530 | orchestrator | ok: [testbed-node-2] =>  2026-02-04 00:30:35.289534 | orchestrator |  docker_version: 5:27.5.1 2026-02-04 00:30:35.289537 | orchestrator | 2026-02-04 00:30:35.289541 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-02-04 00:30:35.289545 | orchestrator | Wednesday 04 February 2026 00:30:29 +0000 (0:00:00.287) 0:05:11.749 **** 2026-02-04 00:30:35.289549 | orchestrator | ok: [testbed-manager] =>  2026-02-04 00:30:35.289553 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-04 00:30:35.289574 | orchestrator | ok: 
[testbed-node-3] =>  2026-02-04 00:30:35.289583 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-04 00:30:35.289589 | orchestrator | ok: [testbed-node-4] =>  2026-02-04 00:30:35.289596 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-04 00:30:35.289602 | orchestrator | ok: [testbed-node-5] =>  2026-02-04 00:30:35.289608 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-04 00:30:35.289615 | orchestrator | ok: [testbed-node-0] =>  2026-02-04 00:30:35.289622 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-04 00:30:35.289628 | orchestrator | ok: [testbed-node-1] =>  2026-02-04 00:30:35.289634 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-04 00:30:35.289638 | orchestrator | ok: [testbed-node-2] =>  2026-02-04 00:30:35.289642 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-04 00:30:35.289647 | orchestrator | 2026-02-04 00:30:35.289651 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-02-04 00:30:35.289656 | orchestrator | Wednesday 04 February 2026 00:30:29 +0000 (0:00:00.284) 0:05:12.033 **** 2026-02-04 00:30:35.289660 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:30:35.289664 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:30:35.289669 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:30:35.289673 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:30:35.289677 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:30:35.289681 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:30:35.289685 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:30:35.289690 | orchestrator | 2026-02-04 00:30:35.289694 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-02-04 00:30:35.289698 | orchestrator | Wednesday 04 February 2026 00:30:30 +0000 (0:00:00.256) 0:05:12.290 **** 2026-02-04 00:30:35.289703 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:30:35.289707 | orchestrator | 
skipping: [testbed-node-3] 2026-02-04 00:30:35.289711 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:30:35.289716 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:30:35.289765 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:30:35.289769 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:30:35.289773 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:30:35.289778 | orchestrator | 2026-02-04 00:30:35.289783 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-02-04 00:30:35.289787 | orchestrator | Wednesday 04 February 2026 00:30:30 +0000 (0:00:00.253) 0:05:12.544 **** 2026-02-04 00:30:35.289793 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:30:35.289799 | orchestrator | 2026-02-04 00:30:35.289804 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-02-04 00:30:35.289808 | orchestrator | Wednesday 04 February 2026 00:30:30 +0000 (0:00:00.391) 0:05:12.935 **** 2026-02-04 00:30:35.289813 | orchestrator | ok: [testbed-manager] 2026-02-04 00:30:35.289817 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:30:35.289821 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:30:35.289826 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:30:35.289830 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:30:35.289835 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:30:35.289886 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:30:35.289891 | orchestrator | 2026-02-04 00:30:35.289895 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-02-04 00:30:35.289900 | orchestrator | Wednesday 04 February 2026 00:30:31 +0000 (0:00:00.987) 0:05:13.922 **** 2026-02-04 
00:30:35.289904 | orchestrator | ok: [testbed-manager] 2026-02-04 00:30:35.289909 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:30:35.289913 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:30:35.289918 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:30:35.289922 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:30:35.289927 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:30:35.289936 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:30:35.289940 | orchestrator | 2026-02-04 00:30:35.289945 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-02-04 00:30:35.289950 | orchestrator | Wednesday 04 February 2026 00:30:34 +0000 (0:00:03.013) 0:05:16.935 **** 2026-02-04 00:30:35.289955 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-02-04 00:30:35.289960 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-02-04 00:30:35.289964 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-02-04 00:30:35.289969 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-02-04 00:30:35.289973 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-02-04 00:30:35.289978 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-02-04 00:30:35.289982 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:30:35.289986 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-02-04 00:30:35.289991 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-02-04 00:30:35.289997 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:30:35.290003 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-02-04 00:30:35.290009 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-02-04 00:30:35.290062 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-02-04 00:30:35.290070 | orchestrator | skipping: [testbed-node-5] => 
(item=docker-engine)  2026-02-04 00:30:35.290076 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:30:35.290083 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-02-04 00:30:35.290094 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-02-04 00:31:36.405979 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-02-04 00:31:36.406127 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:31:36.406146 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-02-04 00:31:36.406159 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-02-04 00:31:36.406171 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:31:36.406182 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-02-04 00:31:36.406193 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:31:36.406205 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-02-04 00:31:36.406217 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-02-04 00:31:36.406228 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-02-04 00:31:36.406241 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:31:36.406249 | orchestrator | 2026-02-04 00:31:36.406256 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-02-04 00:31:36.406265 | orchestrator | Wednesday 04 February 2026 00:30:35 +0000 (0:00:00.587) 0:05:17.523 **** 2026-02-04 00:31:36.406278 | orchestrator | ok: [testbed-manager] 2026-02-04 00:31:36.406289 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:31:36.406301 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:31:36.406311 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:31:36.406322 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:31:36.406335 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:31:36.406347 | orchestrator | changed: [testbed-node-5] 
2026-02-04 00:31:36.406384 | orchestrator | 2026-02-04 00:31:36.406396 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-02-04 00:31:36.406408 | orchestrator | Wednesday 04 February 2026 00:30:42 +0000 (0:00:06.808) 0:05:24.332 **** 2026-02-04 00:31:36.406419 | orchestrator | ok: [testbed-manager] 2026-02-04 00:31:36.406430 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:31:36.406441 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:31:36.406451 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:31:36.406462 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:31:36.406473 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:31:36.406484 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:31:36.406495 | orchestrator | 2026-02-04 00:31:36.406506 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-02-04 00:31:36.406517 | orchestrator | Wednesday 04 February 2026 00:30:43 +0000 (0:00:01.071) 0:05:25.403 **** 2026-02-04 00:31:36.406529 | orchestrator | ok: [testbed-manager] 2026-02-04 00:31:36.406539 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:31:36.406550 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:31:36.406561 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:31:36.406572 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:31:36.406582 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:31:36.406593 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:31:36.406604 | orchestrator | 2026-02-04 00:31:36.406707 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-02-04 00:31:36.406720 | orchestrator | Wednesday 04 February 2026 00:30:51 +0000 (0:00:08.471) 0:05:33.875 **** 2026-02-04 00:31:36.406732 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:31:36.406744 | orchestrator | changed: [testbed-manager] 2026-02-04 00:31:36.406756 | 
orchestrator | changed: [testbed-node-4] 2026-02-04 00:31:36.406767 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:31:36.406778 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:31:36.406789 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:31:36.406801 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:31:36.406812 | orchestrator | 2026-02-04 00:31:36.406824 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-02-04 00:31:36.406837 | orchestrator | Wednesday 04 February 2026 00:30:54 +0000 (0:00:03.143) 0:05:37.018 **** 2026-02-04 00:31:36.406848 | orchestrator | ok: [testbed-manager] 2026-02-04 00:31:36.406860 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:31:36.406872 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:31:36.406884 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:31:36.406895 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:31:36.406906 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:31:36.406917 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:31:36.406927 | orchestrator | 2026-02-04 00:31:36.406938 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-02-04 00:31:36.406950 | orchestrator | Wednesday 04 February 2026 00:30:56 +0000 (0:00:01.258) 0:05:38.276 **** 2026-02-04 00:31:36.406961 | orchestrator | ok: [testbed-manager] 2026-02-04 00:31:36.406973 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:31:36.406984 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:31:36.406995 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:31:36.407007 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:31:36.407018 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:31:36.407030 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:31:36.407041 | orchestrator | 2026-02-04 00:31:36.407052 | orchestrator | TASK [osism.services.docker : Unlock 
containerd package] *********************** 2026-02-04 00:31:36.407062 | orchestrator | Wednesday 04 February 2026 00:30:57 +0000 (0:00:01.387) 0:05:39.664 **** 2026-02-04 00:31:36.407074 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:31:36.407085 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:31:36.407096 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:31:36.407108 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:31:36.407119 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:31:36.407196 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:31:36.407210 | orchestrator | changed: [testbed-manager] 2026-02-04 00:31:36.407222 | orchestrator | 2026-02-04 00:31:36.407235 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-02-04 00:31:36.407246 | orchestrator | Wednesday 04 February 2026 00:30:58 +0000 (0:00:00.549) 0:05:40.213 **** 2026-02-04 00:31:36.407258 | orchestrator | ok: [testbed-manager] 2026-02-04 00:31:36.407270 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:31:36.407281 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:31:36.407294 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:31:36.407305 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:31:36.407317 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:31:36.407328 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:31:36.407340 | orchestrator | 2026-02-04 00:31:36.407352 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-02-04 00:31:36.407386 | orchestrator | Wednesday 04 February 2026 00:31:07 +0000 (0:00:09.795) 0:05:50.009 **** 2026-02-04 00:31:36.407398 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:31:36.407409 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:31:36.407421 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:31:36.407432 | orchestrator | changed: [testbed-node-0] 
2026-02-04 00:31:36.407444 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:31:36.407455 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:31:36.407467 | orchestrator | changed: [testbed-manager]
2026-02-04 00:31:36.407478 | orchestrator |
2026-02-04 00:31:36.407490 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-02-04 00:31:36.407503 | orchestrator | Wednesday 04 February 2026 00:31:09 +0000 (0:00:01.450) 0:05:51.459 ****
2026-02-04 00:31:36.407510 | orchestrator | ok: [testbed-manager]
2026-02-04 00:31:36.407517 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:31:36.407524 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:31:36.407531 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:31:36.407537 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:31:36.407544 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:31:36.407551 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:31:36.407558 | orchestrator |
2026-02-04 00:31:36.407564 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-02-04 00:31:36.407571 | orchestrator | Wednesday 04 February 2026 00:31:18 +0000 (0:00:08.888) 0:06:00.348 ****
2026-02-04 00:31:36.407578 | orchestrator | ok: [testbed-manager]
2026-02-04 00:31:36.407585 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:31:36.407592 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:31:36.407598 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:31:36.407605 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:31:36.407611 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:31:36.407618 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:31:36.407624 | orchestrator |
2026-02-04 00:31:36.407631 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-02-04 00:31:36.407690 | orchestrator | Wednesday 04 February 2026 00:31:29 +0000 (0:00:11.245) 0:06:11.594 ****
2026-02-04 00:31:36.407704 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-02-04 00:31:36.407716 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-02-04 00:31:36.407725 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-02-04 00:31:36.407736 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-02-04 00:31:36.407746 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-02-04 00:31:36.407756 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-02-04 00:31:36.407765 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-02-04 00:31:36.407774 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-02-04 00:31:36.407785 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-02-04 00:31:36.407794 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-02-04 00:31:36.407817 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-02-04 00:31:36.407908 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-02-04 00:31:36.407925 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-02-04 00:31:36.407932 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-02-04 00:31:36.407939 | orchestrator |
2026-02-04 00:31:36.407946 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-02-04 00:31:36.407953 | orchestrator | Wednesday 04 February 2026 00:31:30 +0000 (0:00:01.287) 0:06:12.882 ****
2026-02-04 00:31:36.407963 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:31:36.407975 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:31:36.407987 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:31:36.407998 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:31:36.408009 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:31:36.408020 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:31:36.408030 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:31:36.408041 | orchestrator |
2026-02-04 00:31:36.408053 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-02-04 00:31:36.408065 | orchestrator | Wednesday 04 February 2026 00:31:31 +0000 (0:00:00.527) 0:06:13.409 ****
2026-02-04 00:31:36.408076 | orchestrator | ok: [testbed-manager]
2026-02-04 00:31:36.408087 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:31:36.408098 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:31:36.408109 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:31:36.408119 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:31:36.408129 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:31:36.408140 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:31:36.408150 | orchestrator |
2026-02-04 00:31:36.408165 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-02-04 00:31:36.408177 | orchestrator | Wednesday 04 February 2026 00:31:35 +0000 (0:00:04.096) 0:06:17.505 ****
2026-02-04 00:31:36.408188 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:31:36.408199 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:31:36.408210 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:31:36.408220 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:31:36.408231 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:31:36.408243 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:31:36.408335 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:31:36.408347 | orchestrator |
2026-02-04 00:31:36.408369 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-02-04 00:31:36.408376 | orchestrator | Wednesday 04 February 2026 00:31:35 +0000 (0:00:00.496) 0:06:18.002 ****
2026-02-04 00:31:36.408383 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-02-04 00:31:36.408390 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-02-04 00:31:36.408397 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:31:36.408404 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-02-04 00:31:36.408411 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-02-04 00:31:36.408417 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:31:36.408424 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-02-04 00:31:36.408430 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-02-04 00:31:36.408437 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:31:36.408457 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-02-04 00:31:55.302726 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-02-04 00:31:55.302898 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:31:55.302914 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-02-04 00:31:55.302925 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-02-04 00:31:55.302936 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:31:55.302946 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-02-04 00:31:55.302979 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-02-04 00:31:55.302990 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:31:55.303000 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-02-04 00:31:55.303009 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-02-04 00:31:55.303019 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:31:55.303032 | orchestrator |
2026-02-04 00:31:55.303051 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-02-04 00:31:55.303063 | orchestrator | Wednesday 04 February 2026 00:31:36 +0000 (0:00:00.669) 0:06:18.671 ****
2026-02-04 00:31:55.303073 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:31:55.303084 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:31:55.303100 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:31:55.303111 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:31:55.303122 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:31:55.303137 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:31:55.303147 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:31:55.303157 | orchestrator |
2026-02-04 00:31:55.303167 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-02-04 00:31:55.303177 | orchestrator | Wednesday 04 February 2026 00:31:37 +0000 (0:00:00.487) 0:06:19.158 ****
2026-02-04 00:31:55.303187 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:31:55.303200 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:31:55.303216 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:31:55.303232 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:31:55.303246 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:31:55.303259 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:31:55.303286 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:31:55.303303 | orchestrator |
2026-02-04 00:31:55.303318 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-02-04 00:31:55.303334 | orchestrator | Wednesday 04 February 2026 00:31:37 +0000 (0:00:00.466) 0:06:19.625 ****
2026-02-04 00:31:55.303350 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:31:55.303365 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:31:55.303380 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:31:55.303395 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:31:55.303412 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:31:55.303428 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:31:55.303444 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:31:55.303460 | orchestrator |
2026-02-04 00:31:55.303477 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-02-04 00:31:55.303489 | orchestrator | Wednesday 04 February 2026 00:31:38 +0000 (0:00:00.531) 0:06:20.157 ****
2026-02-04 00:31:55.303499 | orchestrator | ok: [testbed-manager]
2026-02-04 00:31:55.303509 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:31:55.303519 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:31:55.303528 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:31:55.303538 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:31:55.303548 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:31:55.303557 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:31:55.303567 | orchestrator |
2026-02-04 00:31:55.303577 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-02-04 00:31:55.303587 | orchestrator | Wednesday 04 February 2026 00:31:40 +0000 (0:00:01.962) 0:06:22.119 ****
2026-02-04 00:31:55.303598 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:31:55.303618 | orchestrator |
2026-02-04 00:31:55.303629 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-02-04 00:31:55.303639 | orchestrator | Wednesday 04 February 2026 00:31:40 +0000 (0:00:00.804) 0:06:22.924 ****
2026-02-04 00:31:55.303691 | orchestrator | ok: [testbed-manager]
2026-02-04 00:31:55.303703 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:31:55.303713 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:31:55.303723 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:31:55.303732 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:31:55.303744 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:31:55.303760 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:31:55.303770 | orchestrator |
2026-02-04 00:31:55.303780 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-02-04 00:31:55.303790 | orchestrator | Wednesday 04 February 2026 00:31:41 +0000 (0:00:00.850) 0:06:23.775 ****
2026-02-04 00:31:55.303800 | orchestrator | ok: [testbed-manager]
2026-02-04 00:31:55.303810 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:31:55.303819 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:31:55.303829 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:31:55.303839 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:31:55.303848 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:31:55.303858 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:31:55.303868 | orchestrator |
2026-02-04 00:31:55.303877 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-02-04 00:31:55.303887 | orchestrator | Wednesday 04 February 2026 00:31:42 +0000 (0:00:00.814) 0:06:24.589 ****
2026-02-04 00:31:55.303897 | orchestrator | ok: [testbed-manager]
2026-02-04 00:31:55.303906 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:31:55.303916 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:31:55.303926 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:31:55.303935 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:31:55.303945 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:31:55.303954 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:31:55.303964 | orchestrator |
2026-02-04 00:31:55.303974 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-02-04 00:31:55.304021 | orchestrator | Wednesday 04 February 2026 00:31:44 +0000 (0:00:01.486) 0:06:26.076 ****
2026-02-04 00:31:55.304039 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:31:55.304050 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:31:55.304060 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:31:55.304070 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:31:55.304080 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:31:55.304089 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:31:55.304099 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:31:55.304109 | orchestrator |
2026-02-04 00:31:55.304119 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-02-04 00:31:55.304128 | orchestrator | Wednesday 04 February 2026 00:31:45 +0000 (0:00:01.390) 0:06:27.466 ****
2026-02-04 00:31:55.304138 | orchestrator | ok: [testbed-manager]
2026-02-04 00:31:55.304148 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:31:55.304158 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:31:55.304167 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:31:55.304177 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:31:55.304187 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:31:55.304196 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:31:55.304206 | orchestrator |
2026-02-04 00:31:55.304216 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-02-04 00:31:55.304226 | orchestrator | Wednesday 04 February 2026 00:31:46 +0000 (0:00:01.365) 0:06:28.832 ****
2026-02-04 00:31:55.304235 | orchestrator | changed: [testbed-manager]
2026-02-04 00:31:55.304245 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:31:55.304255 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:31:55.304264 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:31:55.304274 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:31:55.304287 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:31:55.304304 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:31:55.304320 | orchestrator |
2026-02-04 00:31:55.304336 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-02-04 00:31:55.304363 | orchestrator | Wednesday 04 February 2026 00:31:48 +0000 (0:00:01.325) 0:06:30.157 ****
2026-02-04 00:31:55.304381 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:31:55.304399 | orchestrator |
2026-02-04 00:31:55.304416 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-02-04 00:31:55.304433 | orchestrator | Wednesday 04 February 2026 00:31:49 +0000 (0:00:01.036) 0:06:31.193 ****
2026-02-04 00:31:55.304443 | orchestrator | ok: [testbed-manager]
2026-02-04 00:31:55.304453 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:31:55.304463 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:31:55.304472 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:31:55.304482 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:31:55.304492 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:31:55.304501 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:31:55.304511 | orchestrator |
2026-02-04 00:31:55.304521 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-02-04 00:31:55.304530 | orchestrator | Wednesday 04 February 2026 00:31:50 +0000 (0:00:01.368) 0:06:32.562 ****
2026-02-04 00:31:55.304541 | orchestrator | ok: [testbed-manager]
2026-02-04 00:31:55.304557 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:31:55.304580 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:31:55.304597 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:31:55.304611 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:31:55.304626 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:31:55.304641 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:31:55.304655 | orchestrator |
2026-02-04 00:31:55.304701 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-02-04 00:31:55.304717 | orchestrator | Wednesday 04 February 2026 00:31:51 +0000 (0:00:01.110) 0:06:33.673 ****
2026-02-04 00:31:55.304732 | orchestrator | ok: [testbed-manager]
2026-02-04 00:31:55.304747 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:31:55.304763 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:31:55.304778 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:31:55.304794 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:31:55.304810 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:31:55.304824 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:31:55.304838 | orchestrator |
2026-02-04 00:31:55.304852 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-02-04 00:31:55.304867 | orchestrator | Wednesday 04 February 2026 00:31:52 +0000 (0:00:01.105) 0:06:34.779 ****
2026-02-04 00:31:55.304882 | orchestrator | ok: [testbed-manager]
2026-02-04 00:31:55.304897 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:31:55.304931 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:31:55.304948 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:31:55.304964 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:31:55.304981 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:31:55.304997 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:31:55.305013 | orchestrator |
2026-02-04 00:31:55.305028 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-02-04 00:31:55.305044 | orchestrator | Wednesday 04 February 2026 00:31:54 +0000 (0:00:01.328) 0:06:36.107 ****
2026-02-04 00:31:55.305062 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:31:55.305079 | orchestrator |
2026-02-04 00:31:55.305095 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-04 00:31:55.305106 | orchestrator | Wednesday 04 February 2026 00:31:54 +0000 (0:00:00.915) 0:06:37.022 ****
2026-02-04 00:31:55.305121 | orchestrator |
2026-02-04 00:31:55.305137 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-04 00:31:55.305153 | orchestrator | Wednesday 04 February 2026 00:31:55 +0000 (0:00:00.041) 0:06:37.063 ****
2026-02-04 00:31:55.305185 | orchestrator |
2026-02-04 00:31:55.305203 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-04 00:31:55.305221 | orchestrator | Wednesday 04 February 2026 00:31:55 +0000 (0:00:00.040) 0:06:37.103 ****
2026-02-04 00:31:55.305236 | orchestrator |
2026-02-04 00:31:55.305252 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-04 00:31:55.305275 | orchestrator | Wednesday 04 February 2026 00:31:55 +0000 (0:00:00.047) 0:06:37.151 ****
2026-02-04 00:32:21.251828 | orchestrator |
2026-02-04 00:32:21.251934 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-04 00:32:21.251947 | orchestrator | Wednesday 04 February 2026 00:31:55 +0000 (0:00:00.039) 0:06:37.190 ****
2026-02-04 00:32:21.251955 | orchestrator |
2026-02-04 00:32:21.251962 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-04 00:32:21.251968 | orchestrator | Wednesday 04 February 2026 00:31:55 +0000 (0:00:00.039) 0:06:37.230 ****
2026-02-04 00:32:21.251974 | orchestrator |
2026-02-04 00:32:21.251980 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-04 00:32:21.251987 | orchestrator | Wednesday 04 February 2026 00:31:55 +0000 (0:00:00.045) 0:06:37.276 ****
2026-02-04 00:32:21.251994 | orchestrator |
2026-02-04 00:32:21.252000 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-04 00:32:21.252007 | orchestrator | Wednesday 04 February 2026 00:31:55 +0000 (0:00:00.039) 0:06:37.315 ****
2026-02-04 00:32:21.252013 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:32:21.252021 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:32:21.252027 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:32:21.252033 | orchestrator |
2026-02-04 00:32:21.252040 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-02-04 00:32:21.252047 | orchestrator | Wednesday 04 February 2026 00:31:56 +0000 (0:00:01.209) 0:06:38.525 ****
2026-02-04 00:32:21.252053 | orchestrator | changed: [testbed-manager]
2026-02-04 00:32:21.252060 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:32:21.252066 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:32:21.252072 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:32:21.252079 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:32:21.252085 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:32:21.252091 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:32:21.252097 | orchestrator |
2026-02-04 00:32:21.252103 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-02-04 00:32:21.252109 | orchestrator | Wednesday 04 February 2026 00:31:57 +0000 (0:00:01.466) 0:06:39.992 ****
2026-02-04 00:32:21.252115 | orchestrator | changed: [testbed-manager]
2026-02-04 00:32:21.252121 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:32:21.252147 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:32:21.252155 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:32:21.252163 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:32:21.252169 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:32:21.252175 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:32:21.252181 | orchestrator |
2026-02-04 00:32:21.252187 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-02-04 00:32:21.252193 | orchestrator | Wednesday 04 February 2026 00:31:59 +0000 (0:00:01.177) 0:06:41.170 ****
2026-02-04 00:32:21.252199 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:32:21.252205 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:32:21.252211 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:32:21.252217 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:32:21.252223 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:32:21.252229 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:32:21.252234 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:32:21.252240 | orchestrator |
2026-02-04 00:32:21.252246 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-02-04 00:32:21.252252 | orchestrator | Wednesday 04 February 2026 00:32:01 +0000 (0:00:02.235) 0:06:43.405 ****
2026-02-04 00:32:21.252284 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:32:21.252290 | orchestrator |
2026-02-04 00:32:21.252297 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-02-04 00:32:21.252303 | orchestrator | Wednesday 04 February 2026 00:32:01 +0000 (0:00:00.110) 0:06:43.516 ****
2026-02-04 00:32:21.252310 | orchestrator | ok: [testbed-manager]
2026-02-04 00:32:21.252316 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:32:21.252323 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:32:21.252329 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:32:21.252336 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:32:21.252342 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:32:21.252348 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:32:21.252355 | orchestrator |
2026-02-04 00:32:21.252361 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-02-04 00:32:21.252370 | orchestrator | Wednesday 04 February 2026 00:32:02 +0000 (0:00:01.090) 0:06:44.606 ****
2026-02-04 00:32:21.252376 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:32:21.252383 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:32:21.252403 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:32:21.252410 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:32:21.252416 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:32:21.252422 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:32:21.252429 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:32:21.252436 | orchestrator |
2026-02-04 00:32:21.252443 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-02-04 00:32:21.252449 | orchestrator | Wednesday 04 February 2026 00:32:03 +0000 (0:00:00.500) 0:06:45.106 ****
2026-02-04 00:32:21.252457 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:32:21.252465 | orchestrator |
2026-02-04 00:32:21.252473 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-02-04 00:32:21.252479 | orchestrator | Wednesday 04 February 2026 00:32:04 +0000 (0:00:01.088) 0:06:46.195 ****
2026-02-04 00:32:21.252485 | orchestrator | ok: [testbed-manager]
2026-02-04 00:32:21.252491 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:32:21.252496 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:32:21.252503 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:32:21.252509 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:32:21.252516 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:32:21.252522 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:32:21.252528 | orchestrator |
2026-02-04 00:32:21.252534 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-02-04 00:32:21.252541 | orchestrator | Wednesday 04 February 2026 00:32:05 +0000 (0:00:00.874) 0:06:47.070 ****
2026-02-04 00:32:21.252547 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-02-04 00:32:21.252572 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-02-04 00:32:21.252580 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-02-04 00:32:21.252586 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-02-04 00:32:21.252592 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-02-04 00:32:21.252598 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-02-04 00:32:21.252604 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-02-04 00:32:21.252611 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-02-04 00:32:21.252617 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-02-04 00:32:21.252623 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-02-04 00:32:21.252630 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-02-04 00:32:21.252635 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-02-04 00:32:21.252639 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-02-04 00:32:21.252652 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-02-04 00:32:21.252657 | orchestrator |
2026-02-04 00:32:21.252662 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-02-04 00:32:21.252666 | orchestrator | Wednesday 04 February 2026 00:32:07 +0000 (0:00:02.321) 0:06:49.391 ****
2026-02-04 00:32:21.252670 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:32:21.252675 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:32:21.252679 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:32:21.252683 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:32:21.252687 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:32:21.252715 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:32:21.252719 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:32:21.252724 | orchestrator |
2026-02-04 00:32:21.252728 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-02-04 00:32:21.252733 | orchestrator | Wednesday 04 February 2026 00:32:08 +0000 (0:00:00.650) 0:06:50.042 ****
2026-02-04 00:32:21.252740 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:32:21.252746 | orchestrator |
2026-02-04 00:32:21.252751 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-02-04 00:32:21.252756 | orchestrator | Wednesday 04 February 2026 00:32:08 +0000 (0:00:00.854) 0:06:50.896 ****
2026-02-04 00:32:21.252760 | orchestrator | ok: [testbed-manager]
2026-02-04 00:32:21.252764 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:32:21.252767 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:32:21.252771 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:32:21.252775 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:32:21.252779 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:32:21.252782 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:32:21.252786 | orchestrator |
2026-02-04 00:32:21.252790 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-02-04 00:32:21.252794 | orchestrator | Wednesday 04 February 2026 00:32:09 +0000 (0:00:00.826) 0:06:51.723 ****
2026-02-04 00:32:21.252798 | orchestrator | ok: [testbed-manager]
2026-02-04 00:32:21.252801 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:32:21.252805 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:32:21.252809 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:32:21.252813 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:32:21.252816 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:32:21.252820 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:32:21.252824 | orchestrator |
2026-02-04 00:32:21.252828 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-02-04 00:32:21.252832 | orchestrator | Wednesday 04 February 2026 00:32:10 +0000 (0:00:01.033) 0:06:52.756 ****
2026-02-04 00:32:21.252835 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:32:21.252839 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:32:21.252843 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:32:21.252848 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:32:21.252855 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:32:21.252860 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:32:21.252864 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:32:21.252868 | orchestrator |
2026-02-04 00:32:21.252872 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-02-04 00:32:21.252876 | orchestrator | Wednesday 04 February 2026 00:32:11 +0000 (0:00:00.537) 0:06:53.294 ****
2026-02-04 00:32:21.252880 | orchestrator | ok: [testbed-manager]
2026-02-04 00:32:21.252884 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:32:21.252888 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:32:21.252892 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:32:21.252896 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:32:21.252899 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:32:21.252910 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:32:21.252914 | orchestrator |
2026-02-04 00:32:21.252918 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-02-04 00:32:21.252922 | orchestrator | Wednesday 04 February 2026 00:32:12 +0000 (0:00:01.593) 0:06:54.888 ****
2026-02-04 00:32:21.252925 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:32:21.252929 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:32:21.252933 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:32:21.252937 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:32:21.252940 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:32:21.252944 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:32:21.252948 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:32:21.252952 | orchestrator |
2026-02-04 00:32:21.252955 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-02-04 00:32:21.252959 | orchestrator | Wednesday 04 February 2026 00:32:13 +0000 (0:00:00.499) 0:06:55.388 ****
2026-02-04 00:32:21.252963 | orchestrator | ok: [testbed-manager]
2026-02-04 00:32:21.252967 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:32:21.252971 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:32:21.252974 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:32:21.252978 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:32:21.252982 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:32:21.252991 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:32:53.509900 | orchestrator |
2026-02-04 00:32:53.510001 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-02-04 00:32:53.510011 | orchestrator | Wednesday 04 February 2026 00:32:21 +0000 (0:00:07.882) 0:07:03.271 ****
2026-02-04 00:32:53.510068 | orchestrator | ok: [testbed-manager]
2026-02-04 00:32:53.510076 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:32:53.510083 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:32:53.510090 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:32:53.510096 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:32:53.510102 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:32:53.510108 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:32:53.510114 | orchestrator |
2026-02-04 00:32:53.510121 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-02-04 00:32:53.510127 | orchestrator | Wednesday 04 February 2026 00:32:22 +0000 (0:00:01.553) 0:07:04.824 ****
2026-02-04 00:32:53.510133 | orchestrator | ok: [testbed-manager]
2026-02-04 00:32:53.510138 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:32:53.510145 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:32:53.510150 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:32:53.510157 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:32:53.510163 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:32:53.510170 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:32:53.510176 | orchestrator |
2026-02-04 00:32:53.510182 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-02-04 00:32:53.510189 | orchestrator | Wednesday 04 February 2026 00:32:24 +0000 (0:00:01.831) 0:07:06.655 ****
2026-02-04 00:32:53.510194 | orchestrator | ok: [testbed-manager]
2026-02-04 00:32:53.510201 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:32:53.510207 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:32:53.510213 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:32:53.510219 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:32:53.510225 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:32:53.510231 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:32:53.510237 | orchestrator |
2026-02-04 00:32:53.510243 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-04 00:32:53.510249 | orchestrator | Wednesday 04 February 2026 00:32:26 +0000 (0:00:01.627) 0:07:08.283 ****
2026-02-04 00:32:53.510256 | orchestrator | ok: [testbed-manager]
2026-02-04 00:32:53.510262 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:32:53.510268 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:32:53.510274 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:32:53.510301 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:32:53.510307 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:32:53.510313 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:32:53.510319 | orchestrator |
2026-02-04 00:32:53.510325 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-04 00:32:53.510331 | orchestrator | Wednesday 04 February 2026 00:32:27 +0000 (0:00:00.862) 0:07:09.145 ****
2026-02-04 00:32:53.510337 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:32:53.510343 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:32:53.510349 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:32:53.510355 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:32:53.510361 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:32:53.510367 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:32:53.510373 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:32:53.510379 | orchestrator |
2026-02-04 00:32:53.510385 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-02-04 00:32:53.510391 | orchestrator | Wednesday 04 February 2026 00:32:28 +0000 (0:00:01.089) 0:07:10.235 ****
2026-02-04 00:32:53.510397 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:32:53.510403 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:32:53.510408 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:32:53.510414 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:32:53.510420 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:32:53.510426 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:32:53.510432 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:32:53.510437 | orchestrator | 2026-02-04 00:32:53.510443 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-02-04 00:32:53.510449 | orchestrator | Wednesday 04 February 2026 00:32:28 +0000 (0:00:00.553) 0:07:10.789 **** 2026-02-04 00:32:53.510455 | orchestrator | ok: [testbed-manager] 2026-02-04 00:32:53.510476 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:32:53.510482 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:32:53.510488 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:32:53.510494 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:32:53.510499 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:32:53.510505 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:32:53.510511 | orchestrator | 2026-02-04 00:32:53.510520 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-02-04 00:32:53.510526 | orchestrator | Wednesday 04 February 2026 00:32:29 +0000 (0:00:00.485) 0:07:11.274 **** 2026-02-04 00:32:53.510532 | orchestrator | ok: [testbed-manager] 2026-02-04 00:32:53.510538 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:32:53.510543 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:32:53.510549 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:32:53.510555 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:32:53.510561 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:32:53.510567 | orchestrator | ok: [testbed-node-2] 2026-02-04 
00:32:53.510573 | orchestrator | 2026-02-04 00:32:53.510579 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-02-04 00:32:53.510585 | orchestrator | Wednesday 04 February 2026 00:32:29 +0000 (0:00:00.575) 0:07:11.850 **** 2026-02-04 00:32:53.510590 | orchestrator | ok: [testbed-manager] 2026-02-04 00:32:53.510596 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:32:53.510602 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:32:53.510607 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:32:53.510613 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:32:53.510619 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:32:53.510625 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:32:53.510631 | orchestrator | 2026-02-04 00:32:53.510637 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-02-04 00:32:53.510643 | orchestrator | Wednesday 04 February 2026 00:32:30 +0000 (0:00:00.700) 0:07:12.550 **** 2026-02-04 00:32:53.510648 | orchestrator | ok: [testbed-manager] 2026-02-04 00:32:53.510654 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:32:53.510659 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:32:53.510670 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:32:53.510676 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:32:53.510682 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:32:53.510687 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:32:53.510693 | orchestrator | 2026-02-04 00:32:53.510716 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-02-04 00:32:53.510739 | orchestrator | Wednesday 04 February 2026 00:32:35 +0000 (0:00:05.133) 0:07:17.684 **** 2026-02-04 00:32:53.510747 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:32:53.510754 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:32:53.510759 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:32:53.510765 
| orchestrator | skipping: [testbed-node-5] 2026-02-04 00:32:53.510770 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:32:53.510776 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:32:53.510781 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:32:53.510787 | orchestrator | 2026-02-04 00:32:53.510792 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-02-04 00:32:53.510798 | orchestrator | Wednesday 04 February 2026 00:32:36 +0000 (0:00:00.486) 0:07:18.171 **** 2026-02-04 00:32:53.510806 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:32:53.510813 | orchestrator | 2026-02-04 00:32:53.510819 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-02-04 00:32:53.510824 | orchestrator | Wednesday 04 February 2026 00:32:37 +0000 (0:00:00.942) 0:07:19.113 **** 2026-02-04 00:32:53.510829 | orchestrator | ok: [testbed-manager] 2026-02-04 00:32:53.510835 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:32:53.510841 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:32:53.510846 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:32:53.510851 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:32:53.510857 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:32:53.510863 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:32:53.510868 | orchestrator | 2026-02-04 00:32:53.510874 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-02-04 00:32:53.510879 | orchestrator | Wednesday 04 February 2026 00:32:39 +0000 (0:00:01.938) 0:07:21.052 **** 2026-02-04 00:32:53.510885 | orchestrator | ok: [testbed-manager] 2026-02-04 00:32:53.510890 | orchestrator | ok: [testbed-node-3] 2026-02-04 
00:32:53.510896 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:32:53.510902 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:32:53.510907 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:32:53.510913 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:32:53.510918 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:32:53.510924 | orchestrator | 2026-02-04 00:32:53.510929 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-02-04 00:32:53.510935 | orchestrator | Wednesday 04 February 2026 00:32:40 +0000 (0:00:01.102) 0:07:22.154 **** 2026-02-04 00:32:53.510940 | orchestrator | ok: [testbed-manager] 2026-02-04 00:32:53.510946 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:32:53.510951 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:32:53.510957 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:32:53.510962 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:32:53.510968 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:32:53.510973 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:32:53.510978 | orchestrator | 2026-02-04 00:32:53.510984 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-02-04 00:32:53.510990 | orchestrator | Wednesday 04 February 2026 00:32:40 +0000 (0:00:00.817) 0:07:22.972 **** 2026-02-04 00:32:53.510995 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-04 00:32:53.511003 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-04 00:32:53.511013 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-04 00:32:53.511019 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-04 00:32:53.511025 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-04 00:32:53.511033 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-04 00:32:53.511039 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-04 00:32:53.511044 | orchestrator | 2026-02-04 00:32:53.511050 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-02-04 00:32:53.511055 | orchestrator | Wednesday 04 February 2026 00:32:42 +0000 (0:00:01.829) 0:07:24.801 **** 2026-02-04 00:32:53.511061 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:32:53.511067 | orchestrator | 2026-02-04 00:32:53.511073 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-02-04 00:32:53.511078 | orchestrator | Wednesday 04 February 2026 00:32:43 +0000 (0:00:00.762) 0:07:25.564 **** 2026-02-04 00:32:53.511084 | orchestrator | changed: [testbed-manager] 2026-02-04 00:32:53.511089 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:32:53.511095 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:32:53.511101 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:32:53.511106 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:32:53.511112 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:32:53.511118 | orchestrator | changed: 
[testbed-node-1] 2026-02-04 00:32:53.511123 | orchestrator | 2026-02-04 00:32:53.511134 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-02-04 00:33:24.690841 | orchestrator | Wednesday 04 February 2026 00:32:53 +0000 (0:00:09.962) 0:07:35.527 **** 2026-02-04 00:33:24.690953 | orchestrator | ok: [testbed-manager] 2026-02-04 00:33:24.690975 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:33:24.690990 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:33:24.691006 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:33:24.691020 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:33:24.691034 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:33:24.691048 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:33:24.691063 | orchestrator | 2026-02-04 00:33:24.691078 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-02-04 00:33:24.691093 | orchestrator | Wednesday 04 February 2026 00:32:56 +0000 (0:00:02.542) 0:07:38.070 **** 2026-02-04 00:33:24.691110 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:33:24.691126 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:33:24.691141 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:33:24.691151 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:33:24.691160 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:33:24.691169 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:33:24.691178 | orchestrator | 2026-02-04 00:33:24.691187 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-02-04 00:33:24.691196 | orchestrator | Wednesday 04 February 2026 00:32:57 +0000 (0:00:01.340) 0:07:39.410 **** 2026-02-04 00:33:24.691205 | orchestrator | changed: [testbed-manager] 2026-02-04 00:33:24.691215 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:33:24.691225 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:33:24.691239 | orchestrator | changed: 
[testbed-node-5] 2026-02-04 00:33:24.691253 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:33:24.691296 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:33:24.691311 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:33:24.691324 | orchestrator | 2026-02-04 00:33:24.691333 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-02-04 00:33:24.691341 | orchestrator | 2026-02-04 00:33:24.691351 | orchestrator | TASK [Include hardening role] ************************************************** 2026-02-04 00:33:24.691359 | orchestrator | Wednesday 04 February 2026 00:32:58 +0000 (0:00:01.277) 0:07:40.687 **** 2026-02-04 00:33:24.691368 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:33:24.691377 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:33:24.691385 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:33:24.691394 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:33:24.691403 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:33:24.691411 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:33:24.691420 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:33:24.691428 | orchestrator | 2026-02-04 00:33:24.691437 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-02-04 00:33:24.691446 | orchestrator | 2026-02-04 00:33:24.691455 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-02-04 00:33:24.691464 | orchestrator | Wednesday 04 February 2026 00:32:59 +0000 (0:00:00.679) 0:07:41.367 **** 2026-02-04 00:33:24.691472 | orchestrator | changed: [testbed-manager] 2026-02-04 00:33:24.691481 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:33:24.691490 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:33:24.691498 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:33:24.691507 | orchestrator | changed: [testbed-node-0] 2026-02-04 
00:33:24.691515 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:33:24.691524 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:33:24.691532 | orchestrator | 2026-02-04 00:33:24.691541 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-02-04 00:33:24.691550 | orchestrator | Wednesday 04 February 2026 00:33:00 +0000 (0:00:01.306) 0:07:42.673 **** 2026-02-04 00:33:24.691558 | orchestrator | ok: [testbed-manager] 2026-02-04 00:33:24.691567 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:33:24.691576 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:33:24.691584 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:33:24.691593 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:33:24.691602 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:33:24.691610 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:33:24.691619 | orchestrator | 2026-02-04 00:33:24.691628 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-02-04 00:33:24.691637 | orchestrator | Wednesday 04 February 2026 00:33:02 +0000 (0:00:01.442) 0:07:44.116 **** 2026-02-04 00:33:24.691645 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:33:24.691654 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:33:24.691663 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:33:24.691671 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:33:24.691680 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:33:24.691689 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:33:24.691718 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:33:24.691734 | orchestrator | 2026-02-04 00:33:24.691748 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-02-04 00:33:24.691786 | orchestrator | Wednesday 04 February 2026 00:33:02 +0000 (0:00:00.582) 0:07:44.698 **** 2026-02-04 00:33:24.691796 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:33:24.691807 | orchestrator | 2026-02-04 00:33:24.691816 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-02-04 00:33:24.691825 | orchestrator | Wednesday 04 February 2026 00:33:03 +0000 (0:00:00.967) 0:07:45.666 **** 2026-02-04 00:33:24.691836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:33:24.691855 | orchestrator | 2026-02-04 00:33:24.691864 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-02-04 00:33:24.691873 | orchestrator | Wednesday 04 February 2026 00:33:04 +0000 (0:00:00.795) 0:07:46.461 **** 2026-02-04 00:33:24.691882 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:33:24.691890 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:33:24.691899 | orchestrator | changed: [testbed-manager] 2026-02-04 00:33:24.691907 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:33:24.691916 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:33:24.691925 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:33:24.691933 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:33:24.691942 | orchestrator | 2026-02-04 00:33:24.691968 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-02-04 00:33:24.691978 | orchestrator | Wednesday 04 February 2026 00:33:13 +0000 (0:00:08.931) 0:07:55.392 **** 2026-02-04 00:33:24.691987 | orchestrator | changed: [testbed-manager] 2026-02-04 00:33:24.691995 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:33:24.692004 | orchestrator | changed: [testbed-node-4] 2026-02-04 
00:33:24.692012 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:33:24.692021 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:33:24.692029 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:33:24.692038 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:33:24.692047 | orchestrator | 2026-02-04 00:33:24.692055 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-02-04 00:33:24.692064 | orchestrator | Wednesday 04 February 2026 00:33:14 +0000 (0:00:01.037) 0:07:56.430 **** 2026-02-04 00:33:24.692073 | orchestrator | changed: [testbed-manager] 2026-02-04 00:33:24.692081 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:33:24.692090 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:33:24.692099 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:33:24.692107 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:33:24.692116 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:33:24.692124 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:33:24.692133 | orchestrator | 2026-02-04 00:33:24.692142 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-02-04 00:33:24.692155 | orchestrator | Wednesday 04 February 2026 00:33:15 +0000 (0:00:01.322) 0:07:57.752 **** 2026-02-04 00:33:24.692170 | orchestrator | changed: [testbed-manager] 2026-02-04 00:33:24.692184 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:33:24.692199 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:33:24.692212 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:33:24.692226 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:33:24.692238 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:33:24.692254 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:33:24.692268 | orchestrator | 2026-02-04 00:33:24.692301 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-02-04 00:33:24.692329 | orchestrator | Wednesday 04 February 2026 00:33:17 +0000 (0:00:01.847) 0:07:59.600 **** 2026-02-04 00:33:24.692344 | orchestrator | changed: [testbed-manager] 2026-02-04 00:33:24.692359 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:33:24.692371 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:33:24.692380 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:33:24.692388 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:33:24.692397 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:33:24.692406 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:33:24.692415 | orchestrator | 2026-02-04 00:33:24.692423 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-02-04 00:33:24.692432 | orchestrator | Wednesday 04 February 2026 00:33:18 +0000 (0:00:01.241) 0:08:00.841 **** 2026-02-04 00:33:24.692441 | orchestrator | changed: [testbed-manager] 2026-02-04 00:33:24.692449 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:33:24.692458 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:33:24.692476 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:33:24.692484 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:33:24.692493 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:33:24.692502 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:33:24.692510 | orchestrator | 2026-02-04 00:33:24.692519 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-02-04 00:33:24.692528 | orchestrator | 2026-02-04 00:33:24.692537 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-02-04 00:33:24.692546 | orchestrator | Wednesday 04 February 2026 00:33:19 +0000 (0:00:01.155) 0:08:01.997 **** 2026-02-04 00:33:24.692555 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-04 00:33:24.692564 | orchestrator | 2026-02-04 00:33:24.692572 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-02-04 00:33:24.692581 | orchestrator | Wednesday 04 February 2026 00:33:20 +0000 (0:00:00.773) 0:08:02.770 **** 2026-02-04 00:33:24.692590 | orchestrator | ok: [testbed-manager] 2026-02-04 00:33:24.692599 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:33:24.692607 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:33:24.692616 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:33:24.692625 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:33:24.692633 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:33:24.692642 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:33:24.692650 | orchestrator | 2026-02-04 00:33:24.692665 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-02-04 00:33:24.692674 | orchestrator | Wednesday 04 February 2026 00:33:21 +0000 (0:00:01.002) 0:08:03.772 **** 2026-02-04 00:33:24.692683 | orchestrator | changed: [testbed-manager] 2026-02-04 00:33:24.692692 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:33:24.692701 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:33:24.692710 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:33:24.692719 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:33:24.692727 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:33:24.692736 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:33:24.692744 | orchestrator | 2026-02-04 00:33:24.692782 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-02-04 00:33:24.692792 | orchestrator | Wednesday 04 February 2026 00:33:22 +0000 (0:00:01.118) 0:08:04.891 **** 2026-02-04 00:33:24.692801 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-04 00:33:24.692810 | orchestrator | 2026-02-04 00:33:24.692818 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-02-04 00:33:24.692827 | orchestrator | Wednesday 04 February 2026 00:33:23 +0000 (0:00:00.984) 0:08:05.876 **** 2026-02-04 00:33:24.692836 | orchestrator | ok: [testbed-manager] 2026-02-04 00:33:24.692844 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:33:24.692853 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:33:24.692862 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:33:24.692870 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:33:24.692879 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:33:24.692887 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:33:24.692896 | orchestrator | 2026-02-04 00:33:24.692915 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-02-04 00:33:26.304334 | orchestrator | Wednesday 04 February 2026 00:33:24 +0000 (0:00:00.829) 0:08:06.705 **** 2026-02-04 00:33:26.304481 | orchestrator | changed: [testbed-manager] 2026-02-04 00:33:26.304506 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:33:26.304521 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:33:26.304534 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:33:26.304548 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:33:26.304635 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:33:26.304644 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:33:26.304653 | orchestrator | 2026-02-04 00:33:26.304688 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:33:26.304697 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-04 00:33:26.304707 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-02-04 00:33:26.304715 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-04 00:33:26.304723 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-04 00:33:26.304731 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-02-04 00:33:26.304739 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-02-04 00:33:26.304747 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-02-04 00:33:26.304785 | orchestrator | 2026-02-04 00:33:26.304794 | orchestrator | 2026-02-04 00:33:26.304802 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:33:26.304810 | orchestrator | Wednesday 04 February 2026 00:33:25 +0000 (0:00:01.157) 0:08:07.862 **** 2026-02-04 00:33:26.304818 | orchestrator | =============================================================================== 2026-02-04 00:33:26.304826 | orchestrator | osism.commons.packages : Install required packages --------------------- 75.73s 2026-02-04 00:33:26.304834 | orchestrator | osism.commons.packages : Download required packages -------------------- 41.13s 2026-02-04 00:33:26.304842 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.03s 2026-02-04 00:33:26.304850 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.88s 2026-02-04 00:33:26.304859 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.37s 2026-02-04 00:33:26.304868 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.25s 2026-02-04 00:33:26.304877 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 
10.62s 2026-02-04 00:33:26.304888 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.96s 2026-02-04 00:33:26.304897 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.80s 2026-02-04 00:33:26.304907 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.93s 2026-02-04 00:33:26.304917 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.89s 2026-02-04 00:33:26.304926 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.75s 2026-02-04 00:33:26.304935 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.67s 2026-02-04 00:33:26.304962 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.47s 2026-02-04 00:33:26.304986 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.88s 2026-02-04 00:33:26.305000 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.63s 2026-02-04 00:33:26.305013 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.81s 2026-02-04 00:33:26.305027 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.12s 2026-02-04 00:33:26.305041 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.49s 2026-02-04 00:33:26.305055 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.13s 2026-02-04 00:33:26.668966 | orchestrator | + osism apply fail2ban 2026-02-04 00:33:39.412885 | orchestrator | 2026-02-04 00:33:39 | INFO  | Task 215ded97-3d46-451a-aee8-1a4f9b39fb8e (fail2ban) was prepared for execution. 
2026-02-04 00:33:39.412981 | orchestrator | 2026-02-04 00:33:39 | INFO  | It takes a moment until task 215ded97-3d46-451a-aee8-1a4f9b39fb8e (fail2ban) has been started and output is visible here. 2026-02-04 00:34:01.288026 | orchestrator | 2026-02-04 00:34:01.288121 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-02-04 00:34:01.288129 | orchestrator | 2026-02-04 00:34:01.288134 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-02-04 00:34:01.288139 | orchestrator | Wednesday 04 February 2026 00:33:43 +0000 (0:00:00.242) 0:00:00.242 **** 2026-02-04 00:34:01.288144 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:34:01.288150 | orchestrator | 2026-02-04 00:34:01.288154 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-02-04 00:34:01.288159 | orchestrator | Wednesday 04 February 2026 00:33:44 +0000 (0:00:01.086) 0:00:01.329 **** 2026-02-04 00:34:01.288163 | orchestrator | changed: [testbed-manager] 2026-02-04 00:34:01.288168 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:34:01.288172 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:34:01.288175 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:34:01.288179 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:34:01.288183 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:34:01.288187 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:34:01.288191 | orchestrator | 2026-02-04 00:34:01.288195 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-02-04 00:34:01.288199 | orchestrator | Wednesday 04 February 2026 00:33:56 +0000 (0:00:11.537) 0:00:12.867 **** 
2026-02-04 00:34:01.288203 | orchestrator | changed: [testbed-manager]
2026-02-04 00:34:01.288207 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:34:01.288211 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:34:01.288215 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:34:01.288218 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:34:01.288222 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:34:01.288226 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:34:01.288230 | orchestrator |
2026-02-04 00:34:01.288234 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-02-04 00:34:01.288238 | orchestrator | Wednesday 04 February 2026 00:33:57 +0000 (0:00:01.532) 0:00:14.399 ****
2026-02-04 00:34:01.288242 | orchestrator | ok: [testbed-manager]
2026-02-04 00:34:01.288247 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:34:01.288251 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:34:01.288255 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:34:01.288258 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:34:01.288262 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:34:01.288266 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:34:01.288270 | orchestrator |
2026-02-04 00:34:01.288274 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-02-04 00:34:01.288278 | orchestrator | Wednesday 04 February 2026 00:33:59 +0000 (0:00:01.461) 0:00:15.861 ****
2026-02-04 00:34:01.288282 | orchestrator | changed: [testbed-manager]
2026-02-04 00:34:01.288286 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:34:01.288289 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:34:01.288293 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:34:01.288297 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:34:01.288301 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:34:01.288305 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:34:01.288309 | orchestrator |
2026-02-04 00:34:01.288313 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:34:01.288317 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:34:01.288337 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:34:01.288341 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:34:01.288345 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:34:01.288349 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:34:01.288353 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:34:01.288357 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:34:01.288361 | orchestrator |
2026-02-04 00:34:01.288364 | orchestrator |
2026-02-04 00:34:01.288368 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:34:01.288372 | orchestrator | Wednesday 04 February 2026 00:34:00 +0000 (0:00:01.673) 0:00:17.534 ****
2026-02-04 00:34:01.288376 | orchestrator | ===============================================================================
2026-02-04 00:34:01.288380 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.54s
2026-02-04 00:34:01.288384 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.67s
2026-02-04 00:34:01.288387 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.53s
2026-02-04 00:34:01.288391 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.46s
2026-02-04 00:34:01.288395 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.09s
2026-02-04 00:34:01.700758 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-04 00:34:01.700894 | orchestrator | + osism apply network
2026-02-04 00:34:13.879990 | orchestrator | 2026-02-04 00:34:13 | INFO  | Task 2faf9af6-6da2-4d38-8725-fd340b2e775a (network) was prepared for execution.
2026-02-04 00:34:13.880098 | orchestrator | 2026-02-04 00:34:13 | INFO  | It takes a moment until task 2faf9af6-6da2-4d38-8725-fd340b2e775a (network) has been started and output is visible here.
2026-02-04 00:34:43.409810 | orchestrator |
2026-02-04 00:34:43.409918 | orchestrator | PLAY [Apply role network] ******************************************************
2026-02-04 00:34:43.409932 | orchestrator |
2026-02-04 00:34:43.409940 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-02-04 00:34:43.409948 | orchestrator | Wednesday 04 February 2026 00:34:18 +0000 (0:00:00.256) 0:00:00.256 ****
2026-02-04 00:34:43.409955 | orchestrator | ok: [testbed-manager]
2026-02-04 00:34:43.409962 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:34:43.409969 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:34:43.409976 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:34:43.409983 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:34:43.409990 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:34:43.409996 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:34:43.410003 | orchestrator |
2026-02-04 00:34:43.410010 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-02-04 00:34:43.410061 | orchestrator | Wednesday 04 February 2026 00:34:18 +0000 (0:00:00.700) 0:00:00.956 ****
2026-02-04 00:34:43.410070 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:34:43.410079 | orchestrator | 2026-02-04 00:34:43.410087 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2026-02-04 00:34:43.410094 | orchestrator | Wednesday 04 February 2026 00:34:20 +0000 (0:00:01.221) 0:00:02.178 **** 2026-02-04 00:34:43.410127 | orchestrator | ok: [testbed-manager] 2026-02-04 00:34:43.410140 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:34:43.410152 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:34:43.410163 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:34:43.410174 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:34:43.410188 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:34:43.410200 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:34:43.410212 | orchestrator | 2026-02-04 00:34:43.410224 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2026-02-04 00:34:43.410237 | orchestrator | Wednesday 04 February 2026 00:34:22 +0000 (0:00:02.057) 0:00:04.235 **** 2026-02-04 00:34:43.410250 | orchestrator | ok: [testbed-manager] 2026-02-04 00:34:43.410262 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:34:43.410274 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:34:43.410285 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:34:43.410293 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:34:43.410300 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:34:43.410311 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:34:43.410320 | orchestrator | 2026-02-04 00:34:43.410328 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2026-02-04 00:34:43.410335 | orchestrator | Wednesday 04 February 2026 00:34:24 +0000 (0:00:02.148) 0:00:06.384 **** 
2026-02-04 00:34:43.410343 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2026-02-04 00:34:43.410352 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2026-02-04 00:34:43.410360 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2026-02-04 00:34:43.410369 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2026-02-04 00:34:43.410377 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2026-02-04 00:34:43.410386 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2026-02-04 00:34:43.410394 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2026-02-04 00:34:43.410402 | orchestrator | 2026-02-04 00:34:43.410429 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2026-02-04 00:34:43.410438 | orchestrator | Wednesday 04 February 2026 00:34:25 +0000 (0:00:00.984) 0:00:07.368 **** 2026-02-04 00:34:43.410446 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-04 00:34:43.410455 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-04 00:34:43.410464 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-04 00:34:43.410472 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 00:34:43.410481 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-04 00:34:43.410489 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-04 00:34:43.410498 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-04 00:34:43.410506 | orchestrator | 2026-02-04 00:34:43.410515 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2026-02-04 00:34:43.410523 | orchestrator | Wednesday 04 February 2026 00:34:28 +0000 (0:00:03.282) 0:00:10.651 **** 2026-02-04 00:34:43.410532 | orchestrator | changed: [testbed-manager] 2026-02-04 00:34:43.410540 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:34:43.410548 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:34:43.410557 | orchestrator | changed: 
[testbed-node-2] 2026-02-04 00:34:43.410565 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:34:43.410577 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:34:43.410586 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:34:43.410595 | orchestrator | 2026-02-04 00:34:43.410603 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2026-02-04 00:34:43.410611 | orchestrator | Wednesday 04 February 2026 00:34:30 +0000 (0:00:01.876) 0:00:12.527 **** 2026-02-04 00:34:43.410619 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-04 00:34:43.410628 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 00:34:43.410636 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-04 00:34:43.410644 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-04 00:34:43.410653 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-04 00:34:43.410670 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-04 00:34:43.410679 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-04 00:34:43.410687 | orchestrator | 2026-02-04 00:34:43.410696 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-02-04 00:34:43.410704 | orchestrator | Wednesday 04 February 2026 00:34:32 +0000 (0:00:01.717) 0:00:14.244 **** 2026-02-04 00:34:43.410712 | orchestrator | ok: [testbed-manager] 2026-02-04 00:34:43.410720 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:34:43.410727 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:34:43.410734 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:34:43.410741 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:34:43.410749 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:34:43.410756 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:34:43.410781 | orchestrator | 2026-02-04 00:34:43.410795 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-02-04 00:34:43.410824 | 
orchestrator | Wednesday 04 February 2026 00:34:33 +0000 (0:00:01.227) 0:00:15.472 **** 2026-02-04 00:34:43.410832 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:34:43.410840 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:34:43.410847 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:34:43.410854 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:34:43.410861 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:34:43.410868 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:34:43.410875 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:34:43.410882 | orchestrator | 2026-02-04 00:34:43.410889 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2026-02-04 00:34:43.410897 | orchestrator | Wednesday 04 February 2026 00:34:34 +0000 (0:00:00.646) 0:00:16.118 **** 2026-02-04 00:34:43.410904 | orchestrator | ok: [testbed-manager] 2026-02-04 00:34:43.410911 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:34:43.410918 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:34:43.410926 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:34:43.410933 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:34:43.410940 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:34:43.410947 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:34:43.410954 | orchestrator | 2026-02-04 00:34:43.410961 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-02-04 00:34:43.410968 | orchestrator | Wednesday 04 February 2026 00:34:36 +0000 (0:00:02.435) 0:00:18.553 **** 2026-02-04 00:34:43.410975 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:34:43.410983 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:34:43.410990 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:34:43.410997 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:34:43.411004 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:34:43.411011 | 
orchestrator | skipping: [testbed-node-5] 2026-02-04 00:34:43.411019 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-02-04 00:34:43.411028 | orchestrator | 2026-02-04 00:34:43.411035 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-02-04 00:34:43.411042 | orchestrator | Wednesday 04 February 2026 00:34:37 +0000 (0:00:00.886) 0:00:19.440 **** 2026-02-04 00:34:43.411050 | orchestrator | ok: [testbed-manager] 2026-02-04 00:34:43.411057 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:34:43.411064 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:34:43.411071 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:34:43.411078 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:34:43.411085 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:34:43.411092 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:34:43.411099 | orchestrator | 2026-02-04 00:34:43.411106 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-02-04 00:34:43.411114 | orchestrator | Wednesday 04 February 2026 00:34:39 +0000 (0:00:01.790) 0:00:21.231 **** 2026-02-04 00:34:43.411121 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:34:43.411136 | orchestrator | 2026-02-04 00:34:43.411143 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-02-04 00:34:43.411150 | orchestrator | Wednesday 04 February 2026 00:34:40 +0000 (0:00:01.242) 0:00:22.473 **** 2026-02-04 00:34:43.411157 | orchestrator | ok: [testbed-manager] 2026-02-04 00:34:43.411164 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:34:43.411171 | orchestrator 
| ok: [testbed-node-0] 2026-02-04 00:34:43.411178 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:34:43.411186 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:34:43.411193 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:34:43.411200 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:34:43.411207 | orchestrator | 2026-02-04 00:34:43.411214 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-02-04 00:34:43.411221 | orchestrator | Wednesday 04 February 2026 00:34:41 +0000 (0:00:00.994) 0:00:23.467 **** 2026-02-04 00:34:43.411229 | orchestrator | ok: [testbed-manager] 2026-02-04 00:34:43.411236 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:34:43.411243 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:34:43.411250 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:34:43.411257 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:34:43.411264 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:34:43.411271 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:34:43.411278 | orchestrator | 2026-02-04 00:34:43.411286 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-02-04 00:34:43.411293 | orchestrator | Wednesday 04 February 2026 00:34:42 +0000 (0:00:00.800) 0:00:24.268 **** 2026-02-04 00:34:43.411305 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-02-04 00:34:43.411312 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-02-04 00:34:43.411320 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-02-04 00:34:43.411327 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-02-04 00:34:43.411334 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-04 00:34:43.411341 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-02-04 00:34:43.411348 | 
orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-04 00:34:43.411355 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-02-04 00:34:43.411362 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-04 00:34:43.411369 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-04 00:34:43.411376 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-04 00:34:43.411384 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-02-04 00:34:43.411391 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-04 00:34:43.411398 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-02-04 00:34:43.411405 | orchestrator | 2026-02-04 00:34:43.411417 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-02-04 00:34:58.479793 | orchestrator | Wednesday 04 February 2026 00:34:43 +0000 (0:00:01.229) 0:00:25.498 **** 2026-02-04 00:34:58.479883 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:34:58.479892 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:34:58.479898 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:34:58.479904 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:34:58.479910 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:34:58.479916 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:34:58.479922 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:34:58.479927 | orchestrator | 2026-02-04 00:34:58.479935 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-02-04 00:34:58.479959 | orchestrator | Wednesday 04 February 2026 00:34:43 +0000 (0:00:00.597) 0:00:26.095 **** 2026-02-04 00:34:58.479967 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-3, testbed-node-2, testbed-node-1, testbed-node-4, testbed-node-5 2026-02-04 00:34:58.479975 | orchestrator | 2026-02-04 00:34:58.479981 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-02-04 00:34:58.479986 | orchestrator | Wednesday 04 February 2026 00:34:48 +0000 (0:00:04.076) 0:00:30.172 **** 2026-02-04 00:34:58.479993 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:34:58.480001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:34:58.480007 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:34:58.480013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:34:58.480021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 
42}}) 2026-02-04 00:34:58.480027 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:34:58.480033 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:34:58.480045 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:34:58.480051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:34:58.480061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:34:58.480067 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:34:58.480085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:34:58.480096 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:34:58.480102 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:34:58.480108 | orchestrator | 2026-02-04 00:34:58.480114 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-02-04 00:34:58.480120 | orchestrator | Wednesday 04 February 2026 00:34:53 +0000 (0:00:05.113) 0:00:35.286 **** 2026-02-04 00:34:58.480126 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:34:58.480132 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:34:58.480138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:34:58.480144 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:34:58.480150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:34:58.480156 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:34:58.480161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:34:58.480167 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-02-04 00:34:58.480176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-02-04 00:34:58.480182 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 
'mtu': 1350, 'vni': 23}})
2026-02-04 00:34:58.480188 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-04 00:34:58.480221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-04 00:34:58.480235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-04 00:35:04.259298 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-04 00:35:04.259475 | orchestrator |
2026-02-04 00:35:04.259497 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-02-04 00:35:04.259510 | orchestrator | Wednesday 04 February 2026 00:34:58 +0000 (0:00:05.278) 0:00:40.565 ****
2026-02-04 00:35:04.259524 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 00:35:04.259536 | orchestrator |
2026-02-04 00:35:04.259547 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-02-04 00:35:04.259558 | orchestrator | Wednesday 04 February 2026 00:34:59 +0000 (0:00:01.105) 0:00:41.670 ****
2026-02-04 00:35:04.259569 | orchestrator | ok: [testbed-manager]
2026-02-04 00:35:04.259581 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:35:04.259592 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:35:04.259603 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:35:04.259614 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:35:04.259625 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:35:04.259636 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:35:04.259647 | orchestrator |
2026-02-04 00:35:04.259658 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-04 00:35:04.259669 | orchestrator | Wednesday 04 February 2026 00:35:00 +0000 (0:00:01.070) 0:00:42.741 ****
2026-02-04 00:35:04.259680 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-04 00:35:04.259692 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-04 00:35:04.259703 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-04 00:35:04.259715 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-04 00:35:04.259726 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-04 00:35:04.259736 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-04 00:35:04.259748 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-04 00:35:04.259819 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-04 00:35:04.259832 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:35:04.259846 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-04 00:35:04.259860 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-04 00:35:04.259873 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:35:04.259886 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-04 00:35:04.259899 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-04 00:35:04.259912 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-04 00:35:04.259947 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-04 00:35:04.259961 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-04 00:35:04.259974 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:35:04.259986 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-04 00:35:04.260000 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-04 00:35:04.260029 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-04 00:35:04.260043 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-04 00:35:04.260056 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-04 00:35:04.260070 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:35:04.260083 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-04 00:35:04.260096 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-04 00:35:04.260109 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-04 00:35:04.260122 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:35:04.260134 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-04 00:35:04.260147 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:35:04.260160 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-04 00:35:04.260173 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-04 00:35:04.260186 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-04 00:35:04.260197 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-04 00:35:04.260208 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:35:04.260219 | orchestrator |
2026-02-04 00:35:04.260230 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-02-04 00:35:04.260259 | orchestrator | Wednesday 04 February 2026 00:35:02 +0000 (0:00:01.942) 0:00:44.683 ****
2026-02-04 00:35:04.260317 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:35:04.260330 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:35:04.260341 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:35:04.260380 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:35:04.260391 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:35:04.260402 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:35:04.260413 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:35:04.260451 | orchestrator |
2026-02-04 00:35:04.260462 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-02-04 00:35:04.260473 | orchestrator | Wednesday 04 February 2026 00:35:03 +0000 (0:00:00.635) 0:00:45.319 ****
2026-02-04 00:35:04.260484 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:35:04.260495 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:35:04.260506 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:35:04.260517 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:35:04.260528 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:35:04.260539 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:35:04.260550 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:35:04.260561 | orchestrator |
2026-02-04 00:35:04.260572 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:35:04.260584 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-04 00:35:04.260597 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-04 00:35:04.260619 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-04 00:35:04.260630 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-04 00:35:04.260641 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-04 00:35:04.260652 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-04 00:35:04.260663 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-04 00:35:04.260674 | orchestrator |
2026-02-04 00:35:04.260716 | orchestrator |
2026-02-04 00:35:04.260795 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:35:04.260810 | orchestrator | Wednesday 04 February 2026 00:35:03 +0000 (0:00:00.687) 0:00:46.006 ****
2026-02-04 00:35:04.260821 | orchestrator | ===============================================================================
2026-02-04 00:35:04.260832 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.28s
2026-02-04 00:35:04.260842 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.11s
2026-02-04 00:35:04.260853 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.08s
2026-02-04 00:35:04.260864 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.28s
2026-02-04 00:35:04.260902 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.44s
2026-02-04 00:35:04.260915 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 2.15s
2026-02-04 00:35:04.260926 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.06s
2026-02-04 00:35:04.260937 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.94s
2026-02-04 00:35:04.260954 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.88s
2026-02-04 00:35:04.260965 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.79s
2026-02-04 00:35:04.260976 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.72s
2026-02-04 00:35:04.260987 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.24s
2026-02-04 00:35:04.260998 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.23s
2026-02-04 00:35:04.261009 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.23s
2026-02-04 00:35:04.261019 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.22s
2026-02-04 00:35:04.261030 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.11s
2026-02-04 00:35:04.261041 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.07s
2026-02-04 00:35:04.261052 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.99s
2026-02-04 00:35:04.261063 | orchestrator | osism.commons.network : Create required directories --------------------- 0.98s
2026-02-04 00:35:04.261073 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.89s
2026-02-04 00:35:04.562508 | orchestrator | + osism apply wireguard
2026-02-04 00:35:16.509190 | orchestrator | 2026-02-04 00:35:16 | INFO  | Task 394c1334-171d-4ecd-b3cb-141e5f98ac27 (wireguard) was prepared for execution.
2026-02-04 00:35:16.509309 | orchestrator | 2026-02-04 00:35:16 | INFO  | It takes a moment until task 394c1334-171d-4ecd-b3cb-141e5f98ac27 (wireguard) has been started and output is visible here.
2026-02-04 00:35:35.093658 | orchestrator |
2026-02-04 00:35:35.093796 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2026-02-04 00:35:35.093845 | orchestrator |
2026-02-04 00:35:35.093855 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2026-02-04 00:35:35.093863 | orchestrator | Wednesday 04 February 2026 00:35:20 +0000 (0:00:00.165) 0:00:00.165 ****
2026-02-04 00:35:35.093871 | orchestrator | ok: [testbed-manager]
2026-02-04 00:35:35.093879 | orchestrator |
2026-02-04 00:35:35.093887 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2026-02-04 00:35:35.093895 | orchestrator | Wednesday 04 February 2026 00:35:21 +0000 (0:00:01.218) 0:00:01.384 ****
2026-02-04 00:35:35.093902 | orchestrator | changed: [testbed-manager]
2026-02-04 00:35:35.093910 | orchestrator |
2026-02-04 00:35:35.093922 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2026-02-04 00:35:35.093930 | orchestrator | Wednesday 04 February 2026 00:35:27 +0000 (0:00:05.804) 0:00:07.189 ****
2026-02-04 00:35:35.093937 | orchestrator | changed: [testbed-manager]
2026-02-04 00:35:35.093945 | orchestrator |
2026-02-04 00:35:35.093952 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2026-02-04 00:35:35.093959 | orchestrator | Wednesday 04 February 2026 00:35:28 +0000 (0:00:00.548) 0:00:07.737 ****
2026-02-04 00:35:35.093966 | orchestrator | changed: [testbed-manager]
2026-02-04 00:35:35.093974 | orchestrator |
2026-02-04 00:35:35.093981 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2026-02-04 00:35:35.093988 | orchestrator | Wednesday 04 February 2026 00:35:28 +0000 (0:00:00.435) 0:00:08.172 ****
2026-02-04 00:35:35.093995 | orchestrator | ok: [testbed-manager]
2026-02-04 00:35:35.094002 | orchestrator |
2026-02-04 00:35:35.094010 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2026-02-04 00:35:35.094061 | orchestrator | Wednesday 04 February 2026 00:35:29 +0000 (0:00:00.674) 0:00:08.847 ****
2026-02-04 00:35:35.094079 | orchestrator | ok: [testbed-manager]
2026-02-04 00:35:35.094086 | orchestrator |
2026-02-04 00:35:35.094094 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2026-02-04 00:35:35.094101 | orchestrator | Wednesday 04 February 2026 00:35:29 +0000 (0:00:00.404) 0:00:09.251 ****
2026-02-04 00:35:35.094108 | orchestrator | ok: [testbed-manager]
2026-02-04 00:35:35.094116 | orchestrator |
2026-02-04 00:35:35.094123 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2026-02-04 00:35:35.094139 | orchestrator | Wednesday 04 February 2026 00:35:29 +0000 (0:00:00.444) 0:00:09.696 ****
2026-02-04 00:35:35.094147 | orchestrator | changed: [testbed-manager]
2026-02-04 00:35:35.094154 | orchestrator |
2026-02-04 00:35:35.094161 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2026-02-04 00:35:35.094168 | orchestrator | Wednesday 04 February 2026 00:35:31 +0000 (0:00:01.166) 0:00:10.862 ****
2026-02-04 00:35:35.094176 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-04 00:35:35.094183 | orchestrator | changed: [testbed-manager]
2026-02-04 00:35:35.094190 | orchestrator |
2026-02-04 00:35:35.094198 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-02-04 00:35:35.094207 | orchestrator | Wednesday 04 February 2026 00:35:32 +0000 (0:00:00.931) 0:00:11.794 ****
2026-02-04 00:35:35.094216 | orchestrator | changed: [testbed-manager]
2026-02-04 00:35:35.094225 | orchestrator |
2026-02-04 00:35:35.094234 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-02-04 00:35:35.094242 | orchestrator | Wednesday 04 February 2026 00:35:33 +0000 (0:00:01.669) 0:00:13.464 ****
2026-02-04 00:35:35.094251 | orchestrator | changed: [testbed-manager]
2026-02-04 00:35:35.094259 | orchestrator |
2026-02-04 00:35:35.094268 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:35:35.094277 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:35:35.094286 | orchestrator |
2026-02-04 00:35:35.094296 | orchestrator |
2026-02-04 00:35:35.094309 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:35:35.094331 | orchestrator | Wednesday 04 February 2026 00:35:34 +0000 (0:00:00.958) 0:00:14.423 ****
2026-02-04 00:35:35.094345 | orchestrator | ===============================================================================
2026-02-04 00:35:35.094359 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.81s
2026-02-04 00:35:35.094372 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.67s
2026-02-04 00:35:35.094384 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.22s
2026-02-04 00:35:35.094396 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.17s
2026-02-04 00:35:35.094406 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.96s
2026-02-04 00:35:35.094415 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.93s
2026-02-04 00:35:35.094423 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.67s
2026-02-04 00:35:35.094432 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s
2026-02-04 00:35:35.094441 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.45s
2026-02-04 00:35:35.094453 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.44s
2026-02-04 00:35:35.094469 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.40s
2026-02-04 00:35:35.366810 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-02-04 00:35:35.398351 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-02-04 00:35:35.398440 | orchestrator | Dload Upload Total Spent Left Speed
2026-02-04 00:35:35.477307 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 176 0 --:--:-- --:--:-- --:--:-- 177
2026-02-04 00:35:35.491196 | orchestrator | + osism apply --environment custom workarounds
2026-02-04 00:35:37.368151 | orchestrator | 2026-02-04 00:35:37 | INFO  | Trying to run play workarounds in environment custom
2026-02-04 00:35:47.566643 | orchestrator | 2026-02-04 00:35:47 | INFO  | Task 2d686b6c-ffb0-46da-a974-b0b1e08c787f (workarounds) was prepared for execution.
2026-02-04 00:35:47.566844 | orchestrator | 2026-02-04 00:35:47 | INFO  | It takes a moment until task 2d686b6c-ffb0-46da-a974-b0b1e08c787f (workarounds) has been started and output is visible here.
2026-02-04 00:36:11.415052 | orchestrator |
2026-02-04 00:36:11.415196 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 00:36:11.415217 | orchestrator |
2026-02-04 00:36:11.415230 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-02-04 00:36:11.415242 | orchestrator | Wednesday 04 February 2026 00:35:51 +0000 (0:00:00.092) 0:00:00.092 ****
2026-02-04 00:36:11.415253 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-02-04 00:36:11.415264 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-02-04 00:36:11.415275 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-02-04 00:36:11.415287 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-02-04 00:36:11.415298 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-02-04 00:36:11.415308 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-02-04 00:36:11.415318 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-02-04 00:36:11.415328 | orchestrator |
2026-02-04 00:36:11.415338 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-02-04 00:36:11.415347 | orchestrator |
2026-02-04 00:36:11.415357 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-04 00:36:11.415367 | orchestrator | Wednesday 04 February 2026 00:35:51 +0000 (0:00:00.588) 0:00:00.680 ****
2026-02-04 00:36:11.415377 | orchestrator | ok: [testbed-manager]
2026-02-04 00:36:11.415388 | orchestrator |
2026-02-04 00:36:11.415416 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-02-04 00:36:11.415426 | orchestrator |
2026-02-04 00:36:11.415436 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-04 00:36:11.415446 | orchestrator | Wednesday 04 February 2026 00:35:53 +0000 (0:00:02.157) 0:00:02.837 ****
2026-02-04 00:36:11.415457 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:36:11.415466 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:36:11.415476 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:36:11.415485 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:36:11.415495 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:36:11.415504 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:36:11.415514 | orchestrator |
2026-02-04 00:36:11.415523 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-02-04 00:36:11.415533 | orchestrator |
2026-02-04 00:36:11.415545 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-02-04 00:36:11.415560 | orchestrator | Wednesday 04 February 2026 00:35:55 +0000 (0:00:01.809) 0:00:04.646 ****
2026-02-04 00:36:11.415598 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-04 00:36:11.415619 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-04 00:36:11.415635 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-04 00:36:11.415649 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-04 00:36:11.415663 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-04 00:36:11.415691 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-04 00:36:11.415730 | orchestrator |
2026-02-04 00:36:11.415748 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-02-04 00:36:11.415764 | orchestrator | Wednesday 04 February 2026 00:35:57 +0000 (0:00:01.476) 0:00:06.123 ****
2026-02-04 00:36:11.415774 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:36:11.415784 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:36:11.415793 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:36:11.415803 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:36:11.415812 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:36:11.415822 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:36:11.415832 | orchestrator |
2026-02-04 00:36:11.415841 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-02-04 00:36:11.415851 | orchestrator | Wednesday 04 February 2026 00:36:00 +0000 (0:00:03.588) 0:00:09.711 ****
2026-02-04 00:36:11.415861 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:36:11.415870 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:36:11.415881 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:36:11.415890 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:36:11.415900 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:36:11.415910 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:36:11.415919 | orchestrator |
2026-02-04 00:36:11.415929 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-02-04 00:36:11.415939 | orchestrator |
2026-02-04 00:36:11.415948 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-02-04 00:36:11.415958 | orchestrator | Wednesday 04 February 2026 00:36:01 +0000 (0:00:00.645) 0:00:10.357 ****
2026-02-04 00:36:11.415968 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:36:11.415989 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:36:11.415999 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:36:11.416008 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:36:11.416018 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:36:11.416027 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:36:11.416037 | orchestrator | changed: [testbed-manager]
2026-02-04 00:36:11.416055 | orchestrator |
2026-02-04 00:36:11.416065 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-02-04 00:36:11.416075 | orchestrator | Wednesday 04 February 2026 00:36:02 +0000 (0:00:01.511) 0:00:11.868 ****
2026-02-04 00:36:11.416084 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:36:11.416094 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:36:11.416103 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:36:11.416113 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:36:11.416122 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:36:11.416132 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:36:11.416158 | orchestrator | changed: [testbed-manager]
2026-02-04 00:36:11.416168 | orchestrator |
2026-02-04 00:36:11.416178 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-02-04 00:36:11.416187 | orchestrator | Wednesday 04 February 2026 00:36:04 +0000 (0:00:01.521) 0:00:13.409 ****
2026-02-04 00:36:11.416197 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:36:11.416207 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:36:11.416216 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:36:11.416226 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:36:11.416236 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:36:11.416246 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:36:11.416255 | orchestrator | ok: [testbed-manager]
2026-02-04 00:36:11.416265 | orchestrator |
2026-02-04 00:36:11.416275 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-02-04 00:36:11.416285 | orchestrator | Wednesday 04 February 2026 00:36:06 +0000 (0:00:01.521) 0:00:14.930 ****
2026-02-04 00:36:11.416294 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:36:11.416304 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:36:11.416313 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:36:11.416323 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:36:11.416332 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:36:11.416342 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:36:11.416351 | orchestrator | changed: [testbed-manager]
2026-02-04 00:36:11.416361 | orchestrator |
2026-02-04 00:36:11.416371 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-02-04 00:36:11.416380 | orchestrator | Wednesday 04 February 2026 00:36:07 +0000 (0:00:01.742) 0:00:16.673 ****
2026-02-04 00:36:11.416390 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:36:11.416400 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:36:11.416410 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:36:11.416419 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:36:11.416429 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:36:11.416438 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:36:11.416448 | orchestrator | skipping: [testbed-manager]
2026-02-04 00:36:11.416457 | orchestrator |
2026-02-04 00:36:11.416467 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-02-04 00:36:11.416477 | orchestrator |
2026-02-04 00:36:11.416487 | orchestrator | TASK [Install python3-docker] **************************************************
2026-02-04 00:36:11.416496 | orchestrator | Wednesday 04 February 2026 00:36:08 +0000 (0:00:00.631) 0:00:17.304 ****
2026-02-04 00:36:11.416506 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:36:11.416516 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:36:11.416525 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:36:11.416535 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:36:11.416544 | orchestrator | ok: [testbed-manager]
2026-02-04 00:36:11.416554 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:36:11.416565 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:36:11.416582 | orchestrator |
2026-02-04 00:36:11.416597 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:36:11.416613 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 00:36:11.416630 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 00:36:11.416653 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 00:36:11.416676 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 00:36:11.416692 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 00:36:11.416742 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 00:36:11.416761 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 00:36:11.416776 | orchestrator |
2026-02-04 00:36:11.416793 | orchestrator |
2026-02-04 00:36:11.416810 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:36:11.416826 | orchestrator | Wednesday 04 February 2026 00:36:11 +0000 (0:00:02.984) 0:00:20.289 ****
2026-02-04 00:36:11.416843 | orchestrator | ===============================================================================
2026-02-04 00:36:11.416860 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.59s
2026-02-04 00:36:11.416876 | orchestrator | Install python3-docker -------------------------------------------------- 2.98s
2026-02-04 00:36:11.416892 | orchestrator | Apply netplan configuration --------------------------------------------- 2.16s
2026-02-04 00:36:11.416906 | orchestrator | Apply netplan configuration --------------------------------------------- 1.81s
2026-02-04 00:36:11.416916 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.74s
2026-02-04 00:36:11.416926 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.54s
2026-02-04 00:36:11.416941 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.52s
2026-02-04 00:36:11.416957 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.51s
2026-02-04 00:36:11.416972 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.48s
2026-02-04 00:36:11.416988 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.65s
2026-02-04 00:36:11.417002 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.63s
2026-02-04 00:36:11.417033 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.59s
2026-02-04 00:36:12.020126 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-02-04 00:36:23.963135 | orchestrator | 2026-02-04 00:36:23 | INFO  | Task 03393be3-6403-4134-befa-1869fd457855 (reboot) was prepared for execution.
2026-02-04 00:36:23.963267 | orchestrator | 2026-02-04 00:36:23 | INFO  | It takes a moment until task 03393be3-6403-4134-befa-1869fd457855 (reboot) has been started and output is visible here.
2026-02-04 00:36:34.060427 | orchestrator |
2026-02-04 00:36:34.060570 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-04 00:36:34.060598 | orchestrator |
2026-02-04 00:36:34.060617 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-04 00:36:34.060635 | orchestrator | Wednesday 04 February 2026 00:36:28 +0000 (0:00:00.205) 0:00:00.205 ****
2026-02-04 00:36:34.060654 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:36:34.060674 | orchestrator |
2026-02-04 00:36:34.060692 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-04 00:36:34.060743 | orchestrator | Wednesday 04 February 2026 00:36:28 +0000 (0:00:00.113) 0:00:00.318 ****
2026-02-04 00:36:34.060761 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:36:34.060778 | orchestrator |
2026-02-04 00:36:34.060797 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-04 00:36:34.060851 | orchestrator | Wednesday 04 February 2026 00:36:29 +0000 (0:00:00.911) 0:00:01.230 ****
2026-02-04 00:36:34.060874 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:36:34.060892 | orchestrator |
2026-02-04 00:36:34.060911 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-04 00:36:34.060929 | orchestrator |
2026-02-04 00:36:34.060949 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-04 00:36:34.060970 | orchestrator | Wednesday 04 February 2026 00:36:29 +0000 (0:00:00.117) 0:00:01.348 ****
2026-02-04 00:36:34.060989 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:36:34.061007 | orchestrator |
2026-02-04 00:36:34.061025 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-04 00:36:34.061045 | orchestrator | Wednesday 04 February 2026 00:36:29 +0000 (0:00:00.097) 0:00:01.445 ****
2026-02-04 00:36:34.061063 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:36:34.061081 | orchestrator |
2026-02-04 00:36:34.061099 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-04 00:36:34.061119 | orchestrator | Wednesday 04 February 2026 00:36:29 +0000 (0:00:00.644) 0:00:02.089 ****
2026-02-04 00:36:34.061137 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:36:34.061155 | orchestrator |
2026-02-04 00:36:34.061174 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-04 00:36:34.061192 | orchestrator |
2026-02-04 00:36:34.061211 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-04 00:36:34.061231 | orchestrator | Wednesday 04 February 2026 00:36:30 +0000 (0:00:00.105) 0:00:02.195 ****
2026-02-04 00:36:34.061251 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:36:34.061271 | orchestrator |
2026-02-04 00:36:34.061290 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-04 00:36:34.061310 | orchestrator | Wednesday 04 February 2026 00:36:30 +0000 (0:00:00.216) 0:00:02.412 ****
2026-02-04 00:36:34.061328 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:36:34.061347 | orchestrator |
2026-02-04 00:36:34.061365 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-04 00:36:34.061405 | orchestrator | Wednesday 04 February 2026 00:36:30 +0000 (0:00:00.671) 0:00:03.083 ****
2026-02-04 00:36:34.061423 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:36:34.061444 | orchestrator |
2026-02-04 00:36:34.061461 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-04 00:36:34.061479 | orchestrator |
2026-02-04 00:36:34.061497 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-04 00:36:34.061513 | orchestrator | Wednesday 04 February 2026 00:36:31 +0000 (0:00:00.110) 0:00:03.193 ****
2026-02-04 00:36:34.061528 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:36:34.061547 | orchestrator |
2026-02-04 00:36:34.061565 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-04 00:36:34.061584 | orchestrator | Wednesday 04 February 2026 00:36:31 +0000 (0:00:00.108) 0:00:03.302 ****
2026-02-04 00:36:34.061603 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:36:34.061621 | orchestrator |
2026-02-04 00:36:34.061638 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-04 00:36:34.061656 | orchestrator | Wednesday 04 February 2026 00:36:31 +0000 (0:00:00.658) 0:00:03.961 ****
2026-02-04 00:36:34.061675 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:36:34.061695 | orchestrator |
2026-02-04 00:36:34.061745 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-04 00:36:34.061763 | orchestrator |
2026-02-04 00:36:34.061780 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-04 00:36:34.061799 | orchestrator | Wednesday 04 February 2026 00:36:31 +0000 (0:00:00.122) 0:00:04.084 ****
2026-02-04 00:36:34.061817 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:36:34.061837 | orchestrator |
2026-02-04 00:36:34.061856 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-04 00:36:34.061874 | orchestrator | Wednesday 04 February 2026 00:36:32 +0000 (0:00:00.108) 0:00:04.192 ****
2026-02-04 00:36:34.061912 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:36:34.061924 | orchestrator |
2026-02-04 00:36:34.061935 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-04 00:36:34.061945 | orchestrator | Wednesday 04 February 2026 00:36:32 +0000 (0:00:00.671) 0:00:04.863 ****
2026-02-04 00:36:34.061956 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:36:34.061967 | orchestrator |
2026-02-04 00:36:34.061979 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-04 00:36:34.061990 | orchestrator |
2026-02-04 00:36:34.062000 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-04 00:36:34.062011 | orchestrator | Wednesday 04 February 2026 00:36:32 +0000 (0:00:00.110) 0:00:04.974 ****
2026-02-04 00:36:34.062089 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:36:34.062102 | orchestrator |
2026-02-04 00:36:34.062113 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-04 00:36:34.062124 | orchestrator | Wednesday 04 February 2026 00:36:32 +0000 (0:00:00.129) 0:00:05.103 ****
2026-02-04 00:36:34.062135 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:36:34.062146 | orchestrator |
2026-02-04 00:36:34.062157 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-04 00:36:34.062168 | orchestrator | Wednesday 04 February 2026 00:36:33 +0000 (0:00:00.726) 0:00:05.829 ****
2026-02-04 00:36:34.062207 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:36:34.062219 | orchestrator |
2026-02-04 00:36:34.062230 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:36:34.062242 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 00:36:34.062255 | orchestrator |
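[editor's note] The wireguard play above generates the server key pair and preshared key, templates wg0.conf plus client configurations, and enables wg-quick@wg0. A minimal wg0.conf of the kind such a role typically renders looks like this sketch; the addresses, port, and key placeholders are illustrative, not the values the role generated:

```ini
# /etc/wireguard/wg0.conf -- illustrative sketch; the real keys come from
# wg genkey / wg pubkey / wg genpsk, as created by the tasks above
[Interface]
Address = 192.168.48.1/24          ; hypothetical tunnel address
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.48.2/32
```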
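[editor's note] The reboot play that follows is invoked with `-e ireallymeanit=yes`; without that extra variable, its first task ("Exit playbook, if user did not mean to reboot systems") aborts the play, which is why it shows as skipping here. The guard is the common Ansible confirmation pattern, roughly like this sketch (the actual osism task may differ in wording):

```yaml
# Sketch of the confirmation guard; not the playbook's literal source.
- name: Exit playbook, if user did not mean to reboot systems
  ansible.builtin.fail:
    msg: >-
      To reboot the systems, rerun the playbook with
      -e ireallymeanit=yes
  when: ireallymeanit | default('no') != 'yes'
```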
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:36:34.062266 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:36:34.062277 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:36:34.062288 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:36:34.062298 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:36:34.062309 | orchestrator | 2026-02-04 00:36:34.062320 | orchestrator | 2026-02-04 00:36:34.062331 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:36:34.062342 | orchestrator | Wednesday 04 February 2026 00:36:33 +0000 (0:00:00.047) 0:00:05.877 **** 2026-02-04 00:36:34.062353 | orchestrator | =============================================================================== 2026-02-04 00:36:34.062364 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.28s 2026-02-04 00:36:34.062375 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.77s 2026-02-04 00:36:34.062386 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.61s 2026-02-04 00:36:34.317514 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-02-04 00:36:46.252685 | orchestrator | 2026-02-04 00:36:46 | INFO  | Task b385cfbe-f3a5-493f-89f0-40a412e31d86 (wait-for-connection) was prepared for execution. 2026-02-04 00:36:46.252801 | orchestrator | 2026-02-04 00:36:46 | INFO  | It takes a moment until task b385cfbe-f3a5-493f-89f0-40a412e31d86 (wait-for-connection) has been started and output is visible here. 
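The reboot plays above deliberately skip the "wait for the reboot to complete" task; a separate `wait-for-connection` run (next in the trace) blocks until SSH answers again. A minimal stand-alone sketch of that waiting step, with the probe command injectable; the `probe` helper and its ssh flags are assumptions, not the osism implementation.

```shell
# Hypothetical sketch of the "wait until remote systems are reachable"
# step. probe() stands in for an SSH liveness check and can be
# overridden; the flags below are an assumption.
probe() {
    ssh -o BatchMode=yes -o ConnectTimeout=5 "$1" true
}

# Poll probe() until it succeeds or the timeout elapses.
wait_for_connection() {
    local host=$1 timeout=${2:-600} interval=${3:-5}
    local waited=0
    until probe "$host"; do
        waited=$((waited + interval))
        if [ "$waited" -ge "$timeout" ]; then
            return 1
        fi
        sleep "$interval"
    done
    return 0
}
```

In the log the same effect is achieved with `osism apply wait-for-connection -l testbed-nodes`, which wraps Ansible's `wait_for_connection` module.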
2026-02-04 00:37:01.779043 | orchestrator | 2026-02-04 00:37:01.779164 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-02-04 00:37:01.779176 | orchestrator | 2026-02-04 00:37:01.779182 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-02-04 00:37:01.779188 | orchestrator | Wednesday 04 February 2026 00:36:49 +0000 (0:00:00.201) 0:00:00.201 **** 2026-02-04 00:37:01.779194 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:37:01.779201 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:37:01.779207 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:37:01.779212 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:37:01.779218 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:37:01.779237 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:37:01.779244 | orchestrator | 2026-02-04 00:37:01.779251 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:37:01.779258 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:37:01.779266 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:37:01.779273 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:37:01.779279 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:37:01.779285 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:37:01.779292 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:37:01.779298 | orchestrator | 2026-02-04 00:37:01.779305 | orchestrator | 2026-02-04 00:37:01.779312 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-04 00:37:01.779318 | orchestrator | Wednesday 04 February 2026 00:37:01 +0000 (0:00:11.491) 0:00:11.692 **** 2026-02-04 00:37:01.779325 | orchestrator | =============================================================================== 2026-02-04 00:37:01.779331 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.49s 2026-02-04 00:37:02.089925 | orchestrator | + osism apply hddtemp 2026-02-04 00:37:14.273457 | orchestrator | 2026-02-04 00:37:14 | INFO  | Task 24f8f226-5e0b-4ece-9ba7-82ace5e64b8d (hddtemp) was prepared for execution. 2026-02-04 00:37:14.273593 | orchestrator | 2026-02-04 00:37:14 | INFO  | It takes a moment until task 24f8f226-5e0b-4ece-9ba7-82ace5e64b8d (hddtemp) has been started and output is visible here. 2026-02-04 00:37:42.325417 | orchestrator | 2026-02-04 00:37:42.325597 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-02-04 00:37:42.325618 | orchestrator | 2026-02-04 00:37:42.325630 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-02-04 00:37:42.325641 | orchestrator | Wednesday 04 February 2026 00:37:17 +0000 (0:00:00.183) 0:00:00.183 **** 2026-02-04 00:37:42.325653 | orchestrator | ok: [testbed-manager] 2026-02-04 00:37:42.325665 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:37:42.325677 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:37:42.325752 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:37:42.325764 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:37:42.325775 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:37:42.325786 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:37:42.325798 | orchestrator | 2026-02-04 00:37:42.325809 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-02-04 00:37:42.325821 | orchestrator | Wednesday 04 February 2026 
00:37:18 +0000 (0:00:00.511) 0:00:00.695 **** 2026-02-04 00:37:42.325834 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:37:42.325874 | orchestrator | 2026-02-04 00:37:42.325887 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-02-04 00:37:42.325898 | orchestrator | Wednesday 04 February 2026 00:37:19 +0000 (0:00:00.871) 0:00:01.566 **** 2026-02-04 00:37:42.325909 | orchestrator | ok: [testbed-manager] 2026-02-04 00:37:42.325920 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:37:42.325931 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:37:42.325942 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:37:42.325953 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:37:42.325967 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:37:42.325980 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:37:42.325993 | orchestrator | 2026-02-04 00:37:42.326006 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-02-04 00:37:42.326075 | orchestrator | Wednesday 04 February 2026 00:37:21 +0000 (0:00:02.063) 0:00:03.630 **** 2026-02-04 00:37:42.326091 | orchestrator | changed: [testbed-manager] 2026-02-04 00:37:42.326105 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:37:42.326117 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:37:42.326130 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:37:42.326142 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:37:42.326154 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:37:42.326167 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:37:42.326180 | orchestrator | 2026-02-04 00:37:42.326193 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-02-04 00:37:42.326206 | orchestrator | Wednesday 04 February 2026 00:37:22 +0000 (0:00:00.970) 0:00:04.600 **** 2026-02-04 00:37:42.326219 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:37:42.326231 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:37:42.326244 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:37:42.326256 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:37:42.326269 | orchestrator | ok: [testbed-manager] 2026-02-04 00:37:42.326281 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:37:42.326308 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:37:42.326320 | orchestrator | 2026-02-04 00:37:42.326331 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-02-04 00:37:42.326342 | orchestrator | Wednesday 04 February 2026 00:37:24 +0000 (0:00:02.026) 0:00:06.627 **** 2026-02-04 00:37:42.326353 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:37:42.326364 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:37:42.326375 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:37:42.326386 | orchestrator | changed: [testbed-manager] 2026-02-04 00:37:42.326397 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:37:42.326408 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:37:42.326419 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:37:42.326430 | orchestrator | 2026-02-04 00:37:42.326441 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-02-04 00:37:42.326452 | orchestrator | Wednesday 04 February 2026 00:37:24 +0000 (0:00:00.666) 0:00:07.293 **** 2026-02-04 00:37:42.326462 | orchestrator | changed: [testbed-manager] 2026-02-04 00:37:42.326473 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:37:42.326487 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:37:42.326505 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:37:42.326523 | orchestrator | changed: 
[testbed-node-3] 2026-02-04 00:37:42.326543 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:37:42.326559 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:37:42.326570 | orchestrator | 2026-02-04 00:37:42.326581 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-02-04 00:37:42.326592 | orchestrator | Wednesday 04 February 2026 00:37:38 +0000 (0:00:13.994) 0:00:21.287 **** 2026-02-04 00:37:42.326604 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:37:42.326615 | orchestrator | 2026-02-04 00:37:42.326636 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-02-04 00:37:42.326647 | orchestrator | Wednesday 04 February 2026 00:37:40 +0000 (0:00:01.187) 0:00:22.474 **** 2026-02-04 00:37:42.326658 | orchestrator | changed: [testbed-manager] 2026-02-04 00:37:42.326669 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:37:42.326706 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:37:42.326718 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:37:42.326729 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:37:42.326740 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:37:42.326751 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:37:42.326761 | orchestrator | 2026-02-04 00:37:42.326786 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:37:42.326885 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:37:42.326919 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:37:42.326932 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:37:42.326943 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:37:42.326955 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:37:42.326966 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:37:42.326976 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:37:42.327019 | orchestrator | 2026-02-04 00:37:42.327031 | orchestrator | 2026-02-04 00:37:42.327042 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:37:42.327053 | orchestrator | Wednesday 04 February 2026 00:37:41 +0000 (0:00:01.904) 0:00:24.379 **** 2026-02-04 00:37:42.327064 | orchestrator | =============================================================================== 2026-02-04 00:37:42.327075 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.99s 2026-02-04 00:37:42.327086 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.06s 2026-02-04 00:37:42.327097 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 2.03s 2026-02-04 00:37:42.327108 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.90s 2026-02-04 00:37:42.327119 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.19s 2026-02-04 00:37:42.327130 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 0.97s 2026-02-04 00:37:42.327141 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 0.87s 2026-02-04 00:37:42.327155 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.67s 2026-02-04 00:37:42.327173 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.51s 2026-02-04 00:37:42.603948 | orchestrator | ++ semver 9.5.0 7.1.1 2026-02-04 00:37:42.653281 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-04 00:37:42.653364 | orchestrator | + sudo systemctl restart manager.service 2026-02-04 00:37:56.500035 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-04 00:37:56.500143 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-04 00:37:56.500161 | orchestrator | + local max_attempts=60 2026-02-04 00:37:56.500193 | orchestrator | + local name=ceph-ansible 2026-02-04 00:37:56.500205 | orchestrator | + local attempt_num=1 2026-02-04 00:37:56.500218 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-04 00:37:56.546966 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-04 00:37:56.547092 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-04 00:37:56.547118 | orchestrator | + sleep 5 2026-02-04 00:38:01.553629 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-04 00:38:01.597663 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-04 00:38:01.597785 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-04 00:38:01.597801 | orchestrator | + sleep 5 2026-02-04 00:38:06.601295 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-04 00:38:06.638092 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-04 00:38:06.638179 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-04 00:38:06.638190 | orchestrator | + sleep 5 2026-02-04 00:38:11.644351 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-04 00:38:11.682311 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-04 00:38:11.682396 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-02-04 00:38:11.682411 | orchestrator | + sleep 5 2026-02-04 00:38:16.686438 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-04 00:38:16.719231 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-04 00:38:16.719295 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-04 00:38:16.719302 | orchestrator | + sleep 5 2026-02-04 00:38:21.724339 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-04 00:38:21.757755 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-04 00:38:21.757875 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-04 00:38:21.757902 | orchestrator | + sleep 5 2026-02-04 00:38:26.762335 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-04 00:38:26.801926 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-04 00:38:26.802081 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-04 00:38:26.802099 | orchestrator | + sleep 5 2026-02-04 00:38:31.806004 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-04 00:38:31.847176 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-04 00:38:31.847270 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-04 00:38:31.847282 | orchestrator | + sleep 5 2026-02-04 00:38:36.850499 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-04 00:38:36.870010 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-04 00:38:36.870168 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-04 00:38:36.870183 | orchestrator | + sleep 5 2026-02-04 00:38:41.873179 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-04 00:38:41.911505 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-04 00:38:41.911598 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-02-04 00:38:41.911612 | orchestrator | + sleep 5 2026-02-04 00:38:46.917055 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-04 00:38:46.948126 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-04 00:38:46.948240 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-04 00:38:46.948258 | orchestrator | + sleep 5 2026-02-04 00:38:51.952367 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-04 00:38:51.990333 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-04 00:38:51.990428 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-04 00:38:51.990443 | orchestrator | + sleep 5 2026-02-04 00:38:56.995523 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-04 00:38:57.035577 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-04 00:38:57.035748 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-04 00:38:57.035768 | orchestrator | + sleep 5 2026-02-04 00:39:02.040222 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-04 00:39:02.077929 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-04 00:39:02.078072 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-04 00:39:02.078089 | orchestrator | + local max_attempts=60 2026-02-04 00:39:02.078102 | orchestrator | + local name=kolla-ansible 2026-02-04 00:39:02.078113 | orchestrator | + local attempt_num=1 2026-02-04 00:39:02.078780 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-04 00:39:02.107560 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-04 00:39:02.107726 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-04 00:39:02.107749 | orchestrator | + local max_attempts=60 2026-02-04 00:39:02.107796 | orchestrator | + local name=osism-ansible 2026-02-04 00:39:02.107808 | 
orchestrator | + local attempt_num=1 2026-02-04 00:39:02.108063 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-04 00:39:02.136285 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-04 00:39:02.136378 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-04 00:39:02.136397 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-04 00:39:02.302316 | orchestrator | ARA in ceph-ansible already disabled. 2026-02-04 00:39:02.435405 | orchestrator | ARA in kolla-ansible already disabled. 2026-02-04 00:39:02.581952 | orchestrator | ARA in osism-ansible already disabled. 2026-02-04 00:39:02.718464 | orchestrator | ARA in osism-kubernetes already disabled. 2026-02-04 00:39:02.718586 | orchestrator | + osism apply gather-facts 2026-02-04 00:39:14.670187 | orchestrator | 2026-02-04 00:39:14 | INFO  | Task 4e2c341e-0d60-4963-b1f6-a93fd2ed7948 (gather-facts) was prepared for execution. 2026-02-04 00:39:14.670286 | orchestrator | 2026-02-04 00:39:14 | INFO  | It takes a moment until task 4e2c341e-0d60-4963-b1f6-a93fd2ed7948 (gather-facts) has been started and output is visible here. 
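The Bash trace above polls `docker inspect` every five seconds until each container reports `healthy`. The helper can be reconstructed roughly as follows; `health_status()` wraps the inspect call so the retry logic is testable, and the structure is inferred from the trace rather than copied from the actual script.

```shell
# Assumed wrapper around the docker inspect call seen in the trace.
health_status() {
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}

# Retry until the container is healthy or max_attempts is exhausted.
# The interval parameter is an addition for testability; the traced
# script sleeps a fixed 5 seconds.
wait_for_container_healthy() {
    local max_attempts=$1 name=$2 interval=${3:-5}
    local attempt_num=1 status
    while true; do
        status=$(health_status "$name")
        if [ "$status" = "healthy" ]; then
            return 0
        fi
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep "$interval"
    done
}
```

In the trace this is invoked as `wait_for_container_healthy 60 ceph-ansible` and then for `kolla-ansible` and `osism-ansible`, which happen to be healthy on the first check.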
2026-02-04 00:39:27.407828 | orchestrator | 2026-02-04 00:39:27.407913 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-04 00:39:27.407922 | orchestrator | 2026-02-04 00:39:27.407928 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-04 00:39:27.407934 | orchestrator | Wednesday 04 February 2026 00:39:18 +0000 (0:00:00.195) 0:00:00.195 **** 2026-02-04 00:39:27.407943 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:39:27.407952 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:39:27.407960 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:39:27.407967 | orchestrator | ok: [testbed-manager] 2026-02-04 00:39:27.407975 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:39:27.407983 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:39:27.407990 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:39:27.407998 | orchestrator | 2026-02-04 00:39:27.408006 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-04 00:39:27.408013 | orchestrator | 2026-02-04 00:39:27.408021 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-04 00:39:27.408028 | orchestrator | Wednesday 04 February 2026 00:39:26 +0000 (0:00:08.305) 0:00:08.501 **** 2026-02-04 00:39:27.408033 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:39:27.408038 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:39:27.408043 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:39:27.408048 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:39:27.408052 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:39:27.408057 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:39:27.408062 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:39:27.408067 | orchestrator | 2026-02-04 00:39:27.408072 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-04 00:39:27.408077 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:39:27.408083 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:39:27.408088 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:39:27.408092 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:39:27.408097 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:39:27.408102 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:39:27.408106 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 00:39:27.408130 | orchestrator | 2026-02-04 00:39:27.408135 | orchestrator | 2026-02-04 00:39:27.408140 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:39:27.408144 | orchestrator | Wednesday 04 February 2026 00:39:27 +0000 (0:00:00.478) 0:00:08.980 **** 2026-02-04 00:39:27.408149 | orchestrator | =============================================================================== 2026-02-04 00:39:27.408154 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.31s 2026-02-04 00:39:27.408158 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.48s 2026-02-04 00:39:27.727411 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-02-04 00:39:27.737132 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-02-04 
00:39:27.746594 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-02-04 00:39:27.758337 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-02-04 00:39:27.770758 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-02-04 00:39:27.781152 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-02-04 00:39:27.798286 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-02-04 00:39:27.813289 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-02-04 00:39:27.830589 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-02-04 00:39:27.842834 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-02-04 00:39:27.861457 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-02-04 00:39:27.882198 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-02-04 00:39:27.899455 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-02-04 00:39:27.912759 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-02-04 00:39:27.924576 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-02-04 00:39:27.935300 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-02-04 00:39:27.951383 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-02-04 00:39:27.963435 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-02-04 00:39:27.975970 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-02-04 00:39:27.986755 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-02-04 00:39:27.997361 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-02-04 00:39:28.008483 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-02-04 00:39:28.019443 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-02-04 00:39:28.029032 | orchestrator | + [[ false == \t\r\u\e ]] 2026-02-04 00:39:28.245180 | orchestrator | ok: Runtime: 0:23:35.126799 2026-02-04 00:39:28.345151 | 2026-02-04 00:39:28.345289 | TASK [Deploy services] 2026-02-04 00:39:28.877777 | orchestrator | skipping: Conditional result was False 2026-02-04 00:39:28.898954 | 2026-02-04 00:39:28.899214 | TASK [Deploy in a nutshell] 2026-02-04 00:39:29.614186 | orchestrator | + set -e 2026-02-04 00:39:29.614338 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-04 00:39:29.614353 | orchestrator | ++ export INTERACTIVE=false 2026-02-04 00:39:29.614366 | orchestrator | ++ INTERACTIVE=false 2026-02-04 00:39:29.614374 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-04 00:39:29.614381 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-04 00:39:29.614390 | 
orchestrator | + source /opt/manager-vars.sh 2026-02-04 00:39:29.614418 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-04 00:39:29.614435 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-04 00:39:29.614444 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-04 00:39:29.614453 | orchestrator | ++ CEPH_VERSION=reef 2026-02-04 00:39:29.614461 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-04 00:39:29.614472 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-04 00:39:29.614478 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-04 00:39:29.614491 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-04 00:39:29.614497 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-04 00:39:29.614506 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-04 00:39:29.614512 | orchestrator | ++ export ARA=false 2026-02-04 00:39:29.614519 | orchestrator | ++ ARA=false 2026-02-04 00:39:29.614526 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-04 00:39:29.614533 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-04 00:39:29.614539 | orchestrator | ++ export TEMPEST=true 2026-02-04 00:39:29.614545 | orchestrator | ++ TEMPEST=true 2026-02-04 00:39:29.614552 | orchestrator | ++ export IS_ZUUL=true 2026-02-04 00:39:29.614558 | orchestrator | ++ IS_ZUUL=true 2026-02-04 00:39:29.614564 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.40 2026-02-04 00:39:29.614571 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.40 2026-02-04 00:39:29.614577 | orchestrator | ++ export EXTERNAL_API=false 2026-02-04 00:39:29.614583 | orchestrator | ++ EXTERNAL_API=false 2026-02-04 00:39:29.614590 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-04 00:39:29.614596 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-04 00:39:29.614602 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-04 00:39:29.614608 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-04 00:39:29.614615 | orchestrator | 2026-02-04 00:39:29.614622 | orchestrator | # PULL IMAGES 
2026-02-04 00:39:29.614628 | orchestrator | 2026-02-04 00:39:29.614634 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-04 00:39:29.614704 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-04 00:39:29.614714 | orchestrator | + echo 2026-02-04 00:39:29.614732 | orchestrator | + echo '# PULL IMAGES' 2026-02-04 00:39:29.614739 | orchestrator | + echo 2026-02-04 00:39:29.614745 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-04 00:39:29.660892 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-04 00:39:29.661027 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-02-04 00:39:31.499327 | orchestrator | 2026-02-04 00:39:31 | INFO  | Trying to run play pull-images in environment custom 2026-02-04 00:39:41.701403 | orchestrator | 2026-02-04 00:39:41 | INFO  | Task cdcc4e5c-d15d-4512-bcf2-14ca768ba12e (pull-images) was prepared for execution. 2026-02-04 00:39:41.701792 | orchestrator | 2026-02-04 00:39:41 | INFO  | Task cdcc4e5c-d15d-4512-bcf2-14ca768ba12e is running in background. No more output. Check ARA for logs. 2026-02-04 00:39:43.964190 | orchestrator | 2026-02-04 00:39:43 | INFO  | Trying to run play wipe-partitions in environment custom 2026-02-04 00:39:54.089210 | orchestrator | 2026-02-04 00:39:54 | INFO  | Task ca82acc5-f33e-43b9-9b73-ff5796442e04 (wipe-partitions) was prepared for execution. 2026-02-04 00:39:54.092480 | orchestrator | 2026-02-04 00:39:54 | INFO  | It takes a moment until task ca82acc5-f33e-43b9-9b73-ff5796442e04 (wipe-partitions) has been started and output is visible here. 
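The nutshell deploy drives each play through `osism apply` with a retry count (`-r 2`) and, for `pull-images`, `--no-wait` so the task runs in the background. A minimal retry wrapper in the same spirit (`apply` here is a hypothetical stand-in for the real `osism apply` call, shown only to illustrate the retry semantics):

```shell
# apply RETRIES CMD...: run CMD until it succeeds, at most RETRIES attempts
apply() {
  retries=$1; shift
  n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge "$retries" ] && return 1
  done
}

apply 2 true && echo ok   # succeeds on the first attempt
```

With `--no-wait` the real CLI additionally returns before the play finishes, which is why the log says "running in background. No more output."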
2026-02-04 00:40:05.945127 | orchestrator | 2026-02-04 00:40:05.945237 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-02-04 00:40:05.945253 | orchestrator | 2026-02-04 00:40:05.945263 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-02-04 00:40:05.945277 | orchestrator | Wednesday 04 February 2026 00:39:57 +0000 (0:00:00.093) 0:00:00.093 **** 2026-02-04 00:40:05.945287 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:40:05.945297 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:40:05.945307 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:40:05.945316 | orchestrator | 2026-02-04 00:40:05.945325 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-02-04 00:40:05.945359 | orchestrator | Wednesday 04 February 2026 00:39:58 +0000 (0:00:00.632) 0:00:00.726 **** 2026-02-04 00:40:05.945369 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:05.945378 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:40:05.945386 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:40:05.945400 | orchestrator | 2026-02-04 00:40:05.945409 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-02-04 00:40:05.945418 | orchestrator | Wednesday 04 February 2026 00:39:58 +0000 (0:00:00.287) 0:00:01.014 **** 2026-02-04 00:40:05.945427 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:40:05.945437 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:40:05.945446 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:40:05.945455 | orchestrator | 2026-02-04 00:40:05.945464 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-02-04 00:40:05.945473 | orchestrator | Wednesday 04 February 2026 00:39:59 +0000 (0:00:00.567) 0:00:01.581 **** 2026-02-04 00:40:05.945482 | orchestrator | skipping: 
[testbed-node-3] 2026-02-04 00:40:05.945491 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:40:05.945499 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:40:05.945508 | orchestrator | 2026-02-04 00:40:05.945517 | orchestrator | TASK [Check device availability] *********************************************** 2026-02-04 00:40:05.945526 | orchestrator | Wednesday 04 February 2026 00:39:59 +0000 (0:00:00.205) 0:00:01.787 **** 2026-02-04 00:40:05.945535 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-04 00:40:05.945548 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-04 00:40:05.945557 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-04 00:40:05.945565 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-04 00:40:05.945574 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-04 00:40:05.945583 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-04 00:40:05.945592 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-04 00:40:05.945600 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-04 00:40:05.945609 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-04 00:40:05.945618 | orchestrator | 2026-02-04 00:40:05.945627 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-02-04 00:40:05.945663 | orchestrator | Wednesday 04 February 2026 00:40:00 +0000 (0:00:01.204) 0:00:02.991 **** 2026-02-04 00:40:05.945676 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-02-04 00:40:05.945687 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-02-04 00:40:05.945698 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-02-04 00:40:05.945708 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-02-04 00:40:05.945719 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-02-04 00:40:05.945728 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-02-04 00:40:05.945736 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-02-04 00:40:05.945745 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-02-04 00:40:05.945754 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-02-04 00:40:05.945763 | orchestrator | 2026-02-04 00:40:05.945772 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-02-04 00:40:05.945781 | orchestrator | Wednesday 04 February 2026 00:40:02 +0000 (0:00:01.553) 0:00:04.545 **** 2026-02-04 00:40:05.945789 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-04 00:40:05.945798 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-04 00:40:05.945807 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-04 00:40:05.945816 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-04 00:40:05.945825 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-04 00:40:05.945834 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-04 00:40:05.945842 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-04 00:40:05.945857 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-04 00:40:05.945874 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-04 00:40:05.945883 | orchestrator | 2026-02-04 00:40:05.945892 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-02-04 00:40:05.945901 | orchestrator | Wednesday 04 February 2026 00:40:04 +0000 (0:00:02.291) 0:00:06.836 **** 2026-02-04 00:40:05.945910 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:40:05.945919 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:40:05.945927 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:40:05.945936 | orchestrator | 2026-02-04 00:40:05.945945 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-02-04 00:40:05.945954 | orchestrator | Wednesday 04 February 2026 00:40:05 +0000 (0:00:00.613) 0:00:07.450 **** 2026-02-04 00:40:05.945963 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:40:05.945972 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:40:05.945981 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:40:05.945990 | orchestrator | 2026-02-04 00:40:05.945999 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:40:05.946010 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:40:05.946104 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:40:05.946131 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:40:05.946140 | orchestrator | 2026-02-04 00:40:05.946149 | orchestrator | 2026-02-04 00:40:05.946158 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:40:05.946167 | orchestrator | Wednesday 04 February 2026 00:40:05 +0000 (0:00:00.694) 0:00:08.145 **** 2026-02-04 00:40:05.946176 | orchestrator | =============================================================================== 2026-02-04 00:40:05.946185 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.29s 2026-02-04 00:40:05.946194 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.55s 2026-02-04 00:40:05.946203 | orchestrator | Check device availability ----------------------------------------------- 1.20s 2026-02-04 00:40:05.946212 | orchestrator | Request device events from the kernel ----------------------------------- 0.70s 2026-02-04 00:40:05.946220 | orchestrator | Find all logical devices owned by UID 167 
------------------------------- 0.63s 2026-02-04 00:40:05.946229 | orchestrator | Reload udev rules ------------------------------------------------------- 0.61s 2026-02-04 00:40:05.946238 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.57s 2026-02-04 00:40:05.946247 | orchestrator | Remove all rook related logical devices --------------------------------- 0.29s 2026-02-04 00:40:05.946256 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.21s 2026-02-04 00:40:17.832261 | orchestrator | 2026-02-04 00:40:17 | INFO  | Task d068a3f1-b98e-4fd0-8147-746894c05a1d (facts) was prepared for execution. 2026-02-04 00:40:17.832375 | orchestrator | 2026-02-04 00:40:17 | INFO  | It takes a moment until task d068a3f1-b98e-4fd0-8147-746894c05a1d (facts) has been started and output is visible here. 2026-02-04 00:40:29.699572 | orchestrator | 2026-02-04 00:40:29.699775 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-04 00:40:29.699799 | orchestrator | 2026-02-04 00:40:29.699812 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-04 00:40:29.699824 | orchestrator | Wednesday 04 February 2026 00:40:22 +0000 (0:00:00.253) 0:00:00.253 **** 2026-02-04 00:40:29.699835 | orchestrator | ok: [testbed-manager] 2026-02-04 00:40:29.699848 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:40:29.699860 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:40:29.699871 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:40:29.699907 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:40:29.699919 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:40:29.699930 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:40:29.699941 | orchestrator | 2026-02-04 00:40:29.699952 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-04 00:40:29.699963 | 
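Per its task list, the wipe-partitions play boils down to: remove leftover LVM devices, wipe filesystem signatures with `wipefs`, zero the first 32M of each OSD disk, then reload udev rules and trigger device events. A sketch against a file-backed stand-in disk (an assumption for illustration; the play targets `/dev/sdb` through `/dev/sdd`, and the commented commands need real block devices and root):

```shell
disk=$(mktemp)                   # file standing in for /dev/sdX
printf 'FAKE-SIGNATURE' > "$disk"
truncate -s 64M "$disk"          # pretend it is a 64 MiB disk

# wipefs -a "$disk"              # real disk: erase filesystem/RAID signatures
dd if=/dev/zero of="$disk" bs=1M count=32 conv=notrunc status=none

# udevadm control --reload-rules && udevadm trigger   # real devices only

head -c 14 "$disk" | tr -d '\0' | wc -c   # 0 non-NUL bytes: signature gone
```

Zeroing the first 32M catches metadata (GPT headers, LVM labels, Ceph bluestore labels) that `wipefs` alone might miss.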
orchestrator | Wednesday 04 February 2026 00:40:23 +0000 (0:00:01.143) 0:00:01.396 **** 2026-02-04 00:40:29.699975 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:40:29.699987 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:40:29.700000 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:40:29.700012 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:40:29.700022 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:29.700034 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:40:29.700045 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:40:29.700056 | orchestrator | 2026-02-04 00:40:29.700067 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-04 00:40:29.700078 | orchestrator | 2026-02-04 00:40:29.700089 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-04 00:40:29.700101 | orchestrator | Wednesday 04 February 2026 00:40:24 +0000 (0:00:01.054) 0:00:02.451 **** 2026-02-04 00:40:29.700112 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:40:29.700125 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:40:29.700138 | orchestrator | ok: [testbed-manager] 2026-02-04 00:40:29.700152 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:40:29.700165 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:40:29.700178 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:40:29.700191 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:40:29.700203 | orchestrator | 2026-02-04 00:40:29.700216 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-04 00:40:29.700229 | orchestrator | 2026-02-04 00:40:29.700242 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-04 00:40:29.700255 | orchestrator | Wednesday 04 February 2026 00:40:29 +0000 (0:00:04.779) 0:00:07.231 **** 2026-02-04 00:40:29.700268 | orchestrator | 
skipping: [testbed-manager] 2026-02-04 00:40:29.700281 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:40:29.700294 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:40:29.700307 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:40:29.700336 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:29.700349 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:40:29.700362 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:40:29.700375 | orchestrator | 2026-02-04 00:40:29.700388 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:40:29.700401 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:40:29.700415 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:40:29.700428 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:40:29.700441 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:40:29.700454 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:40:29.700467 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:40:29.700481 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:40:29.700494 | orchestrator | 2026-02-04 00:40:29.700505 | orchestrator | 2026-02-04 00:40:29.700517 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:40:29.700537 | orchestrator | Wednesday 04 February 2026 00:40:29 +0000 (0:00:00.446) 0:00:07.677 **** 2026-02-04 00:40:29.700548 | orchestrator | =============================================================================== 
2026-02-04 00:40:29.700560 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.78s 2026-02-04 00:40:29.700571 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.14s 2026-02-04 00:40:29.700582 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.05s 2026-02-04 00:40:29.700593 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s 2026-02-04 00:40:31.902417 | orchestrator | 2026-02-04 00:40:31 | INFO  | Task 5083723e-9f85-436d-a0d8-694a5829124b (ceph-configure-lvm-volumes) was prepared for execution. 2026-02-04 00:40:31.902499 | orchestrator | 2026-02-04 00:40:31 | INFO  | It takes a moment until task 5083723e-9f85-436d-a0d8-694a5829124b (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-02-04 00:40:42.101208 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-04 00:40:42.101324 | orchestrator | 2.16.14 2026-02-04 00:40:42.101343 | orchestrator | 2026-02-04 00:40:42.101357 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-04 00:40:42.101369 | orchestrator | 2026-02-04 00:40:42.101381 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-04 00:40:42.101394 | orchestrator | Wednesday 04 February 2026 00:40:35 +0000 (0:00:00.234) 0:00:00.234 **** 2026-02-04 00:40:42.101408 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-04 00:40:42.101420 | orchestrator | 2026-02-04 00:40:42.101432 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-04 00:40:42.101443 | orchestrator | Wednesday 04 February 2026 00:40:36 +0000 (0:00:00.220) 0:00:00.455 **** 2026-02-04 00:40:42.101454 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:40:42.101465 | orchestrator | 
2026-02-04 00:40:42.101477 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:42.101488 | orchestrator | Wednesday 04 February 2026 00:40:36 +0000 (0:00:00.203) 0:00:00.659 **** 2026-02-04 00:40:42.101523 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-02-04 00:40:42.101535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-02-04 00:40:42.101547 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-02-04 00:40:42.101558 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-02-04 00:40:42.101569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-02-04 00:40:42.101580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-02-04 00:40:42.101596 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-02-04 00:40:42.101614 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-02-04 00:40:42.101658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-02-04 00:40:42.101676 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-02-04 00:40:42.101694 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-02-04 00:40:42.101712 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-02-04 00:40:42.101742 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-02-04 00:40:42.101760 | orchestrator | 2026-02-04 00:40:42.101778 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2026-02-04 00:40:42.101796 | orchestrator | Wednesday 04 February 2026 00:40:36 +0000 (0:00:00.498) 0:00:01.157 **** 2026-02-04 00:40:42.101839 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:42.101858 | orchestrator | 2026-02-04 00:40:42.101875 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:42.101893 | orchestrator | Wednesday 04 February 2026 00:40:36 +0000 (0:00:00.179) 0:00:01.337 **** 2026-02-04 00:40:42.101912 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:42.101930 | orchestrator | 2026-02-04 00:40:42.101949 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:42.101966 | orchestrator | Wednesday 04 February 2026 00:40:37 +0000 (0:00:00.160) 0:00:01.498 **** 2026-02-04 00:40:42.101983 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:42.102001 | orchestrator | 2026-02-04 00:40:42.102086 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:42.102110 | orchestrator | Wednesday 04 February 2026 00:40:37 +0000 (0:00:00.166) 0:00:01.664 **** 2026-02-04 00:40:42.102136 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:42.102156 | orchestrator | 2026-02-04 00:40:42.102175 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:42.102193 | orchestrator | Wednesday 04 February 2026 00:40:37 +0000 (0:00:00.175) 0:00:01.839 **** 2026-02-04 00:40:42.102212 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:42.102232 | orchestrator | 2026-02-04 00:40:42.102252 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:42.102271 | orchestrator | Wednesday 04 February 2026 00:40:37 +0000 (0:00:00.181) 0:00:02.021 **** 2026-02-04 00:40:42.102287 | orchestrator | skipping: 
[testbed-node-3] 2026-02-04 00:40:42.102298 | orchestrator | 2026-02-04 00:40:42.102309 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:42.102320 | orchestrator | Wednesday 04 February 2026 00:40:37 +0000 (0:00:00.178) 0:00:02.199 **** 2026-02-04 00:40:42.102331 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:42.102342 | orchestrator | 2026-02-04 00:40:42.102352 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:42.102363 | orchestrator | Wednesday 04 February 2026 00:40:37 +0000 (0:00:00.181) 0:00:02.381 **** 2026-02-04 00:40:42.102374 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:42.102385 | orchestrator | 2026-02-04 00:40:42.102397 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:42.102408 | orchestrator | Wednesday 04 February 2026 00:40:38 +0000 (0:00:00.169) 0:00:02.550 **** 2026-02-04 00:40:42.102420 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016) 2026-02-04 00:40:42.102432 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016) 2026-02-04 00:40:42.102443 | orchestrator | 2026-02-04 00:40:42.102454 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:42.102489 | orchestrator | Wednesday 04 February 2026 00:40:38 +0000 (0:00:00.374) 0:00:02.925 **** 2026-02-04 00:40:42.102501 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1679d905-c182-4dcb-a16f-ff388fb87fa8) 2026-02-04 00:40:42.102512 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1679d905-c182-4dcb-a16f-ff388fb87fa8) 2026-02-04 00:40:42.102523 | orchestrator | 2026-02-04 00:40:42.102534 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2026-02-04 00:40:42.102545 | orchestrator | Wednesday 04 February 2026 00:40:39 +0000 (0:00:00.526) 0:00:03.452 **** 2026-02-04 00:40:42.102555 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6b00b999-8e8e-4579-a93c-a7b8030012f4) 2026-02-04 00:40:42.102566 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6b00b999-8e8e-4579-a93c-a7b8030012f4) 2026-02-04 00:40:42.102577 | orchestrator | 2026-02-04 00:40:42.102588 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:42.102599 | orchestrator | Wednesday 04 February 2026 00:40:39 +0000 (0:00:00.508) 0:00:03.961 **** 2026-02-04 00:40:42.102712 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f279b9c8-b4a1-41c6-b00f-bd5a2c0b4c81) 2026-02-04 00:40:42.102727 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f279b9c8-b4a1-41c6-b00f-bd5a2c0b4c81) 2026-02-04 00:40:42.102738 | orchestrator | 2026-02-04 00:40:42.102749 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:42.102761 | orchestrator | Wednesday 04 February 2026 00:40:40 +0000 (0:00:00.624) 0:00:04.585 **** 2026-02-04 00:40:42.102772 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-04 00:40:42.102782 | orchestrator | 2026-02-04 00:40:42.102793 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:42.102804 | orchestrator | Wednesday 04 February 2026 00:40:40 +0000 (0:00:00.290) 0:00:04.876 **** 2026-02-04 00:40:42.102824 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-04 00:40:42.102836 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-04 00:40:42.102847 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-04 00:40:42.102858 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-04 00:40:42.102869 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-04 00:40:42.102880 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-04 00:40:42.102891 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-04 00:40:42.102901 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-02-04 00:40:42.102912 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-04 00:40:42.102923 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-04 00:40:42.102934 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-04 00:40:42.102945 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-04 00:40:42.102955 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-04 00:40:42.102967 | orchestrator | 2026-02-04 00:40:42.102978 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:42.102989 | orchestrator | Wednesday 04 February 2026 00:40:40 +0000 (0:00:00.325) 0:00:05.201 **** 2026-02-04 00:40:42.103000 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:42.103011 | orchestrator | 2026-02-04 00:40:42.103021 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:42.103032 | orchestrator | Wednesday 04 February 2026 00:40:40 +0000 
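The repeated `_add-device-links.yml` / `_add-device-partitions.yml` includes above resolve, per device, which `/dev/disk/by-id` symlinks (the `scsi-0QEMU_*` / `scsi-SQEMU_*` names) point at it, so devices can later be addressed by stable IDs. A sketch of that resolution with a mock by-id directory (paths and the `links_for` helper are illustrative assumptions):

```shell
byid=$(mktemp -d)     # stands in for /dev/disk/by-id
target=$(mktemp)      # stands in for /dev/sdb

ln -s "$target" "$byid/scsi-0QEMU_QEMU_HARDDISK_demo"
ln -s "$target" "$byid/scsi-SQEMU_QEMU_HARDDISK_demo"

# links_for DEV: print the by-id names that resolve to DEV
links_for() {
  for l in "$byid"/*; do
    [ "$(readlink -f "$l")" = "$(readlink -f "$1")" ] && basename "$l"
  done
}

links_for "$target"
```

The QEMU disks each get two by-id aliases (a `scsi-0…` and a `scsi-S…` form), which matches the pairs of `ok:` items in the log.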
(0:00:00.181) 0:00:05.383 **** 2026-02-04 00:40:42.103043 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:42.103054 | orchestrator | 2026-02-04 00:40:42.103065 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:42.103076 | orchestrator | Wednesday 04 February 2026 00:40:41 +0000 (0:00:00.189) 0:00:05.573 **** 2026-02-04 00:40:42.103087 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:42.103097 | orchestrator | 2026-02-04 00:40:42.103108 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:42.103119 | orchestrator | Wednesday 04 February 2026 00:40:41 +0000 (0:00:00.190) 0:00:05.764 **** 2026-02-04 00:40:42.103130 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:42.103141 | orchestrator | 2026-02-04 00:40:42.103152 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:42.103164 | orchestrator | Wednesday 04 February 2026 00:40:41 +0000 (0:00:00.166) 0:00:05.930 **** 2026-02-04 00:40:42.103175 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:42.103200 | orchestrator | 2026-02-04 00:40:42.103219 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:42.103237 | orchestrator | Wednesday 04 February 2026 00:40:41 +0000 (0:00:00.188) 0:00:06.118 **** 2026-02-04 00:40:42.103256 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:42.103275 | orchestrator | 2026-02-04 00:40:42.103293 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:42.103312 | orchestrator | Wednesday 04 February 2026 00:40:41 +0000 (0:00:00.197) 0:00:06.315 **** 2026-02-04 00:40:42.103329 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:42.103374 | orchestrator | 2026-02-04 00:40:42.103409 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-04 00:40:49.460456 | orchestrator | Wednesday 04 February 2026 00:40:42 +0000 (0:00:00.192) 0:00:06.508 **** 2026-02-04 00:40:49.460567 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:49.460585 | orchestrator | 2026-02-04 00:40:49.460598 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:49.460611 | orchestrator | Wednesday 04 February 2026 00:40:42 +0000 (0:00:00.180) 0:00:06.688 **** 2026-02-04 00:40:49.460661 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-04 00:40:49.460674 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-04 00:40:49.460686 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-04 00:40:49.460697 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-04 00:40:49.460708 | orchestrator | 2026-02-04 00:40:49.460720 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:49.460731 | orchestrator | Wednesday 04 February 2026 00:40:43 +0000 (0:00:00.912) 0:00:07.601 **** 2026-02-04 00:40:49.460742 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:49.460754 | orchestrator | 2026-02-04 00:40:49.460765 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:49.460776 | orchestrator | Wednesday 04 February 2026 00:40:43 +0000 (0:00:00.188) 0:00:07.789 **** 2026-02-04 00:40:49.460787 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:49.460798 | orchestrator | 2026-02-04 00:40:49.460809 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:49.460820 | orchestrator | Wednesday 04 February 2026 00:40:43 +0000 (0:00:00.183) 0:00:07.972 **** 2026-02-04 00:40:49.460831 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:49.460842 | orchestrator | 2026-02-04 
00:40:49.460853 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:49.460864 | orchestrator | Wednesday 04 February 2026 00:40:43 +0000 (0:00:00.200) 0:00:08.173 **** 2026-02-04 00:40:49.460875 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:49.460886 | orchestrator | 2026-02-04 00:40:49.460897 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-04 00:40:49.460908 | orchestrator | Wednesday 04 February 2026 00:40:43 +0000 (0:00:00.186) 0:00:08.359 **** 2026-02-04 00:40:49.460919 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-02-04 00:40:49.460930 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-02-04 00:40:49.460941 | orchestrator | 2026-02-04 00:40:49.460953 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-04 00:40:49.460963 | orchestrator | Wednesday 04 February 2026 00:40:44 +0000 (0:00:00.161) 0:00:08.521 **** 2026-02-04 00:40:49.460974 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:49.460985 | orchestrator | 2026-02-04 00:40:49.460999 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-04 00:40:49.461031 | orchestrator | Wednesday 04 February 2026 00:40:44 +0000 (0:00:00.124) 0:00:08.645 **** 2026-02-04 00:40:49.461044 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:49.461057 | orchestrator | 2026-02-04 00:40:49.461070 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-04 00:40:49.461084 | orchestrator | Wednesday 04 February 2026 00:40:44 +0000 (0:00:00.150) 0:00:08.796 **** 2026-02-04 00:40:49.461118 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:49.461131 | orchestrator | 2026-02-04 00:40:49.461147 | orchestrator | TASK [Define lvm_volumes structures] 
******************************************* 2026-02-04 00:40:49.461167 | orchestrator | Wednesday 04 February 2026 00:40:44 +0000 (0:00:00.135) 0:00:08.932 **** 2026-02-04 00:40:49.461185 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:40:49.461198 | orchestrator | 2026-02-04 00:40:49.461211 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-04 00:40:49.461224 | orchestrator | Wednesday 04 February 2026 00:40:44 +0000 (0:00:00.146) 0:00:09.079 **** 2026-02-04 00:40:49.461238 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '29c6bc8c-f904-55ca-809f-6429b65a49e4'}}) 2026-02-04 00:40:49.461252 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1b7fb365-e96c-53e1-a018-1a0a8a845031'}}) 2026-02-04 00:40:49.461265 | orchestrator | 2026-02-04 00:40:49.461276 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-04 00:40:49.461287 | orchestrator | Wednesday 04 February 2026 00:40:44 +0000 (0:00:00.167) 0:00:09.247 **** 2026-02-04 00:40:49.461303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '29c6bc8c-f904-55ca-809f-6429b65a49e4'}})  2026-02-04 00:40:49.461327 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1b7fb365-e96c-53e1-a018-1a0a8a845031'}})  2026-02-04 00:40:49.461339 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:49.461352 | orchestrator | 2026-02-04 00:40:49.461370 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-04 00:40:49.461381 | orchestrator | Wednesday 04 February 2026 00:40:44 +0000 (0:00:00.136) 0:00:09.383 **** 2026-02-04 00:40:49.461392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '29c6bc8c-f904-55ca-809f-6429b65a49e4'}})  2026-02-04 00:40:49.461404 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1b7fb365-e96c-53e1-a018-1a0a8a845031'}})  2026-02-04 00:40:49.461415 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:49.461426 | orchestrator | 2026-02-04 00:40:49.461437 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-04 00:40:49.461448 | orchestrator | Wednesday 04 February 2026 00:40:45 +0000 (0:00:00.313) 0:00:09.697 **** 2026-02-04 00:40:49.461459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '29c6bc8c-f904-55ca-809f-6429b65a49e4'}})  2026-02-04 00:40:49.461487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1b7fb365-e96c-53e1-a018-1a0a8a845031'}})  2026-02-04 00:40:49.461499 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:49.461510 | orchestrator | 2026-02-04 00:40:49.461521 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-04 00:40:49.461532 | orchestrator | Wednesday 04 February 2026 00:40:45 +0000 (0:00:00.161) 0:00:09.858 **** 2026-02-04 00:40:49.461543 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:40:49.461554 | orchestrator | 2026-02-04 00:40:49.461565 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-04 00:40:49.461576 | orchestrator | Wednesday 04 February 2026 00:40:45 +0000 (0:00:00.147) 0:00:10.005 **** 2026-02-04 00:40:49.461587 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:40:49.461598 | orchestrator | 2026-02-04 00:40:49.461615 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-04 00:40:49.461644 | orchestrator | Wednesday 04 February 2026 00:40:45 +0000 (0:00:00.149) 0:00:10.154 **** 2026-02-04 00:40:49.461655 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:40:49.461666 | orchestrator | 
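The "Generate lvm_volumes structure (block only)" and "Compile lvm_volumes" tasks above turn each `ceph_osd_devices` entry into LV/VG names. A minimal Python sketch of that mapping, not the playbook's actual implementation; the `osd-block-`/`ceph-` naming is taken from the configuration data printed further down in this log:

```python
# UUIDs copied from this node's log output.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "29c6bc8c-f904-55ca-809f-6429b65a49e4"},
    "sdc": {"osd_lvm_uuid": "1b7fb365-e96c-53e1-a018-1a0a8a845031"},
}

# Block-only case: one entry per OSD device, with the LV name ("data")
# and VG name ("data_vg") both derived from osd_lvm_uuid.
lvm_volumes = [
    {
        "data": f"osd-block-{v['osd_lvm_uuid']}",
        "data_vg": f"ceph-{v['osd_lvm_uuid']}",
    }
    for v in ceph_osd_devices.values()
]

print(lvm_volumes)
```

With db/wal devices configured, the skipped "block + db"/"block + wal" variants would add further keys per entry; here they skip, so the compiled structure contains only data/data_vg pairs.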
2026-02-04 00:40:49.461677 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-04 00:40:49.461688 | orchestrator | Wednesday 04 February 2026 00:40:45 +0000 (0:00:00.129) 0:00:10.284 ****
2026-02-04 00:40:49.461707 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:40:49.461718 | orchestrator | 
2026-02-04 00:40:49.461729 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-04 00:40:49.461741 | orchestrator | Wednesday 04 February 2026 00:40:46 +0000 (0:00:00.137) 0:00:10.421 ****
2026-02-04 00:40:49.461752 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:40:49.461763 | orchestrator | 
2026-02-04 00:40:49.461774 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-04 00:40:49.461785 | orchestrator | Wednesday 04 February 2026 00:40:46 +0000 (0:00:00.142) 0:00:10.564 ****
2026-02-04 00:40:49.461795 | orchestrator | ok: [testbed-node-3] => {
2026-02-04 00:40:49.461807 | orchestrator |     "ceph_osd_devices": {
2026-02-04 00:40:49.461818 | orchestrator |         "sdb": {
2026-02-04 00:40:49.461829 | orchestrator |             "osd_lvm_uuid": "29c6bc8c-f904-55ca-809f-6429b65a49e4"
2026-02-04 00:40:49.461840 | orchestrator |         },
2026-02-04 00:40:49.461851 | orchestrator |         "sdc": {
2026-02-04 00:40:49.461862 | orchestrator |             "osd_lvm_uuid": "1b7fb365-e96c-53e1-a018-1a0a8a845031"
2026-02-04 00:40:49.461873 | orchestrator |         }
2026-02-04 00:40:49.461884 | orchestrator |     }
2026-02-04 00:40:49.461896 | orchestrator | }
2026-02-04 00:40:49.461907 | orchestrator | 
2026-02-04 00:40:49.461918 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-04 00:40:49.461929 | orchestrator | Wednesday 04 February 2026 00:40:46 +0000 (0:00:00.139) 0:00:10.703 ****
2026-02-04 00:40:49.461940 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:40:49.461951 | orchestrator | 
2026-02-04 00:40:49.461962 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-04 00:40:49.461973 | orchestrator | Wednesday 04 February 2026 00:40:46 +0000 (0:00:00.146) 0:00:10.850 ****
2026-02-04 00:40:49.461984 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:40:49.461995 | orchestrator | 
2026-02-04 00:40:49.462006 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-04 00:40:49.462070 | orchestrator | Wednesday 04 February 2026 00:40:46 +0000 (0:00:00.141) 0:00:10.992 ****
2026-02-04 00:40:49.462084 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:40:49.462095 | orchestrator | 
2026-02-04 00:40:49.462106 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-04 00:40:49.462118 | orchestrator | Wednesday 04 February 2026 00:40:46 +0000 (0:00:00.130) 0:00:11.122 ****
2026-02-04 00:40:49.462128 | orchestrator | changed: [testbed-node-3] => {
2026-02-04 00:40:49.462140 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-04 00:40:49.462152 | orchestrator |         "ceph_osd_devices": {
2026-02-04 00:40:49.462163 | orchestrator |             "sdb": {
2026-02-04 00:40:49.462174 | orchestrator |                 "osd_lvm_uuid": "29c6bc8c-f904-55ca-809f-6429b65a49e4"
2026-02-04 00:40:49.462185 | orchestrator |             },
2026-02-04 00:40:49.462196 | orchestrator |             "sdc": {
2026-02-04 00:40:49.462207 | orchestrator |                 "osd_lvm_uuid": "1b7fb365-e96c-53e1-a018-1a0a8a845031"
2026-02-04 00:40:49.462218 | orchestrator |             }
2026-02-04 00:40:49.462229 | orchestrator |         },
2026-02-04 00:40:49.462240 | orchestrator |         "lvm_volumes": [
2026-02-04 00:40:49.462251 | orchestrator |             {
2026-02-04 00:40:49.462262 | orchestrator |                 "data": "osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4",
2026-02-04 00:40:49.462273 | orchestrator |                 "data_vg": "ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4"
2026-02-04 00:40:49.462284 | orchestrator |             },
2026-02-04 00:40:49.462295 | orchestrator |             {
2026-02-04 00:40:49.462306 | orchestrator |                 "data": "osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031",
2026-02-04 00:40:49.462317 | orchestrator |                 "data_vg": "ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031"
2026-02-04 00:40:49.462328 | orchestrator |             }
2026-02-04 00:40:49.462338 | orchestrator |         ]
2026-02-04 00:40:49.462349 | orchestrator |     }
2026-02-04 00:40:49.462360 | orchestrator | }
2026-02-04 00:40:49.462378 | orchestrator | 
2026-02-04 00:40:49.462389 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-04 00:40:49.462400 | orchestrator | Wednesday 04 February 2026 00:40:47 +0000 (0:00:00.355) 0:00:11.477 ****
2026-02-04 00:40:49.462411 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-04 00:40:49.462423 | orchestrator | 
2026-02-04 00:40:49.462440 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-04 00:40:49.462451 | orchestrator | 
2026-02-04 00:40:49.462462 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-04 00:40:49.462473 | orchestrator | Wednesday 04 February 2026 00:40:48 +0000 (0:00:01.911) 0:00:13.389 ****
2026-02-04 00:40:49.462484 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-02-04 00:40:49.462495 | orchestrator | 
2026-02-04 00:40:49.462506 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-04 00:40:49.462517 | orchestrator | Wednesday 04 February 2026 00:40:49 +0000 (0:00:00.244) 0:00:13.633 ****
2026-02-04 00:40:49.462528 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:40:49.462539 | orchestrator | 
2026-02-04 00:40:49.462559 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:40:56.341695 | orchestrator | Wednesday 04 February 2026 00:40:49 +0000 (0:00:00.232) 
0:00:13.866 **** 2026-02-04 00:40:56.341805 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-04 00:40:56.341822 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-04 00:40:56.341834 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-04 00:40:56.341845 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-04 00:40:56.341857 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-04 00:40:56.341868 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-04 00:40:56.341879 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-04 00:40:56.341890 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-04 00:40:56.341901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-04 00:40:56.341912 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-04 00:40:56.341923 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-04 00:40:56.341934 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-04 00:40:56.341950 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-04 00:40:56.341962 | orchestrator | 2026-02-04 00:40:56.341974 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:56.341986 | orchestrator | Wednesday 04 February 2026 00:40:49 +0000 (0:00:00.387) 0:00:14.253 **** 2026-02-04 00:40:56.341997 | orchestrator | skipping: 
[testbed-node-4] 2026-02-04 00:40:56.342009 | orchestrator | 2026-02-04 00:40:56.342094 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:56.342114 | orchestrator | Wednesday 04 February 2026 00:40:50 +0000 (0:00:00.210) 0:00:14.464 **** 2026-02-04 00:40:56.342132 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:40:56.342148 | orchestrator | 2026-02-04 00:40:56.342167 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:56.342185 | orchestrator | Wednesday 04 February 2026 00:40:50 +0000 (0:00:00.184) 0:00:14.649 **** 2026-02-04 00:40:56.342205 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:40:56.342224 | orchestrator | 2026-02-04 00:40:56.342243 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:56.342261 | orchestrator | Wednesday 04 February 2026 00:40:50 +0000 (0:00:00.197) 0:00:14.846 **** 2026-02-04 00:40:56.342312 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:40:56.342333 | orchestrator | 2026-02-04 00:40:56.342352 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:56.342371 | orchestrator | Wednesday 04 February 2026 00:40:50 +0000 (0:00:00.183) 0:00:15.030 **** 2026-02-04 00:40:56.342391 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:40:56.342410 | orchestrator | 2026-02-04 00:40:56.342429 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:56.342448 | orchestrator | Wednesday 04 February 2026 00:40:51 +0000 (0:00:00.516) 0:00:15.547 **** 2026-02-04 00:40:56.342467 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:40:56.342485 | orchestrator | 2026-02-04 00:40:56.342504 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:56.342522 | 
orchestrator | Wednesday 04 February 2026 00:40:51 +0000 (0:00:00.177) 0:00:15.724 **** 2026-02-04 00:40:56.342541 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:40:56.342559 | orchestrator | 2026-02-04 00:40:56.342576 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:56.342595 | orchestrator | Wednesday 04 February 2026 00:40:51 +0000 (0:00:00.186) 0:00:15.911 **** 2026-02-04 00:40:56.342613 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:40:56.342698 | orchestrator | 2026-02-04 00:40:56.342739 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:56.342758 | orchestrator | Wednesday 04 February 2026 00:40:51 +0000 (0:00:00.170) 0:00:16.082 **** 2026-02-04 00:40:56.342776 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004) 2026-02-04 00:40:56.342797 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004) 2026-02-04 00:40:56.342816 | orchestrator | 2026-02-04 00:40:56.342834 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:56.342852 | orchestrator | Wednesday 04 February 2026 00:40:52 +0000 (0:00:00.367) 0:00:16.450 **** 2026-02-04 00:40:56.342872 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b014772c-38b5-4caa-9603-223bc8ef3a74) 2026-02-04 00:40:56.342890 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b014772c-38b5-4caa-9603-223bc8ef3a74) 2026-02-04 00:40:56.342909 | orchestrator | 2026-02-04 00:40:56.342927 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:56.342947 | orchestrator | Wednesday 04 February 2026 00:40:52 +0000 (0:00:00.367) 0:00:16.818 **** 2026-02-04 00:40:56.342963 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_70272979-0540-4b40-8ef0-41f73c6a4a5a) 2026-02-04 00:40:56.342980 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_70272979-0540-4b40-8ef0-41f73c6a4a5a) 2026-02-04 00:40:56.342998 | orchestrator | 2026-02-04 00:40:56.343017 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:56.343065 | orchestrator | Wednesday 04 February 2026 00:40:52 +0000 (0:00:00.366) 0:00:17.184 **** 2026-02-04 00:40:56.343083 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5b592fbb-955b-4fdf-b12f-717d86698fde) 2026-02-04 00:40:56.343099 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5b592fbb-955b-4fdf-b12f-717d86698fde) 2026-02-04 00:40:56.343117 | orchestrator | 2026-02-04 00:40:56.343135 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:40:56.343154 | orchestrator | Wednesday 04 February 2026 00:40:53 +0000 (0:00:00.384) 0:00:17.569 **** 2026-02-04 00:40:56.343172 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-04 00:40:56.343190 | orchestrator | 2026-02-04 00:40:56.343208 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:56.343226 | orchestrator | Wednesday 04 February 2026 00:40:53 +0000 (0:00:00.299) 0:00:17.868 **** 2026-02-04 00:40:56.343244 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-02-04 00:40:56.343282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-04 00:40:56.343302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-04 00:40:56.343320 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-04 00:40:56.343339 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-04 00:40:56.343357 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-04 00:40:56.343375 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-04 00:40:56.343394 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-04 00:40:56.343412 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-04 00:40:56.343431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-04 00:40:56.343450 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-04 00:40:56.343468 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-04 00:40:56.343485 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-04 00:40:56.343502 | orchestrator | 2026-02-04 00:40:56.343520 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:56.343538 | orchestrator | Wednesday 04 February 2026 00:40:53 +0000 (0:00:00.345) 0:00:18.214 **** 2026-02-04 00:40:56.343557 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:40:56.343576 | orchestrator | 2026-02-04 00:40:56.343594 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:56.343613 | orchestrator | Wednesday 04 February 2026 00:40:54 +0000 (0:00:00.511) 0:00:18.725 **** 2026-02-04 00:40:56.343663 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:40:56.343682 | orchestrator | 2026-02-04 00:40:56.343700 | orchestrator | TASK [Add known partitions to the list of available block 
devices] ************* 2026-02-04 00:40:56.343717 | orchestrator | Wednesday 04 February 2026 00:40:54 +0000 (0:00:00.185) 0:00:18.911 **** 2026-02-04 00:40:56.343736 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:40:56.343754 | orchestrator | 2026-02-04 00:40:56.343772 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:56.343791 | orchestrator | Wednesday 04 February 2026 00:40:54 +0000 (0:00:00.178) 0:00:19.089 **** 2026-02-04 00:40:56.343820 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:40:56.343839 | orchestrator | 2026-02-04 00:40:56.343856 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:56.343874 | orchestrator | Wednesday 04 February 2026 00:40:54 +0000 (0:00:00.179) 0:00:19.268 **** 2026-02-04 00:40:56.343893 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:40:56.343910 | orchestrator | 2026-02-04 00:40:56.343929 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:56.343947 | orchestrator | Wednesday 04 February 2026 00:40:55 +0000 (0:00:00.180) 0:00:19.449 **** 2026-02-04 00:40:56.343965 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:40:56.343983 | orchestrator | 2026-02-04 00:40:56.344002 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:56.344019 | orchestrator | Wednesday 04 February 2026 00:40:55 +0000 (0:00:00.184) 0:00:19.633 **** 2026-02-04 00:40:56.344037 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:40:56.344055 | orchestrator | 2026-02-04 00:40:56.344075 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:56.344092 | orchestrator | Wednesday 04 February 2026 00:40:55 +0000 (0:00:00.172) 0:00:19.806 **** 2026-02-04 00:40:56.344109 | orchestrator | skipping: [testbed-node-4] 
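Each repeated "Add known partitions to the list of available block devices" header above is one iteration of the included `/ansible/tasks/_add-device-partitions.yml`, looped over the devices discovered earlier (loop0–loop7, sda–sdd, sr0). Only sda carries partitions on this testbed node, so every other iteration reports "skipping". A rough Python sketch of the inferred effect (variable names are illustrative, not the playbook's):

```python
# Devices and partitions taken from this log; the per-device include
# contributes partitions only where the device actually has any.
devices = ["loop0", "loop1", "loop2", "loop3", "loop4", "loop5", "loop6",
           "loop7", "sda", "sdb", "sdc", "sdd", "sr0"]
partitions = {"sda": ["sda1", "sda14", "sda15", "sda16"]}

available = []
for dev in devices:
    for part in partitions.get(dev, []):  # empty list -> iteration "skips"
        available.append(part)

print(available)
```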
2026-02-04 00:40:56.344140 | orchestrator | 2026-02-04 00:40:56.344157 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:56.344177 | orchestrator | Wednesday 04 February 2026 00:40:55 +0000 (0:00:00.171) 0:00:19.977 **** 2026-02-04 00:40:56.344195 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-04 00:40:56.344215 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-04 00:40:56.344235 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-04 00:40:56.344253 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-04 00:40:56.344272 | orchestrator | 2026-02-04 00:40:56.344290 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:40:56.344310 | orchestrator | Wednesday 04 February 2026 00:40:56 +0000 (0:00:00.591) 0:00:20.569 **** 2026-02-04 00:40:56.344329 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:41:02.007974 | orchestrator | 2026-02-04 00:41:02.008088 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:41:02.008124 | orchestrator | Wednesday 04 February 2026 00:40:56 +0000 (0:00:00.182) 0:00:20.751 **** 2026-02-04 00:41:02.008137 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:41:02.008150 | orchestrator | 2026-02-04 00:41:02.008162 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:41:02.008174 | orchestrator | Wednesday 04 February 2026 00:40:56 +0000 (0:00:00.183) 0:00:20.934 **** 2026-02-04 00:41:02.008185 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:41:02.008196 | orchestrator | 2026-02-04 00:41:02.008207 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:41:02.008218 | orchestrator | Wednesday 04 February 2026 00:40:56 +0000 (0:00:00.178) 0:00:21.113 **** 2026-02-04 00:41:02.008230 | 
orchestrator | skipping: [testbed-node-4] 2026-02-04 00:41:02.008241 | orchestrator | 2026-02-04 00:41:02.008252 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-04 00:41:02.008263 | orchestrator | Wednesday 04 February 2026 00:40:57 +0000 (0:00:00.569) 0:00:21.682 **** 2026-02-04 00:41:02.008274 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-02-04 00:41:02.008286 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-02-04 00:41:02.008297 | orchestrator | 2026-02-04 00:41:02.008308 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-04 00:41:02.008319 | orchestrator | Wednesday 04 February 2026 00:40:57 +0000 (0:00:00.149) 0:00:21.832 **** 2026-02-04 00:41:02.008330 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:41:02.008341 | orchestrator | 2026-02-04 00:41:02.008352 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-04 00:41:02.008364 | orchestrator | Wednesday 04 February 2026 00:40:57 +0000 (0:00:00.115) 0:00:21.948 **** 2026-02-04 00:41:02.008375 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:41:02.008386 | orchestrator | 2026-02-04 00:41:02.008397 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-04 00:41:02.008408 | orchestrator | Wednesday 04 February 2026 00:40:57 +0000 (0:00:00.131) 0:00:22.080 **** 2026-02-04 00:41:02.008419 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:41:02.008430 | orchestrator | 2026-02-04 00:41:02.008441 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-04 00:41:02.008452 | orchestrator | Wednesday 04 February 2026 00:40:57 +0000 (0:00:00.154) 0:00:22.235 **** 2026-02-04 00:41:02.008463 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:41:02.008475 | 
orchestrator | 2026-02-04 00:41:02.008486 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-04 00:41:02.008498 | orchestrator | Wednesday 04 February 2026 00:40:57 +0000 (0:00:00.169) 0:00:22.404 **** 2026-02-04 00:41:02.008509 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'}}) 2026-02-04 00:41:02.008521 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'}}) 2026-02-04 00:41:02.008558 | orchestrator | 2026-02-04 00:41:02.008569 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-04 00:41:02.008580 | orchestrator | Wednesday 04 February 2026 00:40:58 +0000 (0:00:00.151) 0:00:22.556 **** 2026-02-04 00:41:02.008592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'}})  2026-02-04 00:41:02.008604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'}})  2026-02-04 00:41:02.008647 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:41:02.008668 | orchestrator | 2026-02-04 00:41:02.008687 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-04 00:41:02.008705 | orchestrator | Wednesday 04 February 2026 00:40:58 +0000 (0:00:00.127) 0:00:22.684 **** 2026-02-04 00:41:02.008721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'}})  2026-02-04 00:41:02.008750 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'}})  2026-02-04 00:41:02.008762 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:41:02.008773 | orchestrator | 2026-02-04 
00:41:02.008784 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-04 00:41:02.008795 | orchestrator | Wednesday 04 February 2026 00:40:58 +0000 (0:00:00.208) 0:00:22.892 **** 2026-02-04 00:41:02.008807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'}})  2026-02-04 00:41:02.008818 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'}})  2026-02-04 00:41:02.008830 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:41:02.008841 | orchestrator | 2026-02-04 00:41:02.008852 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-04 00:41:02.008863 | orchestrator | Wednesday 04 February 2026 00:40:58 +0000 (0:00:00.133) 0:00:23.026 **** 2026-02-04 00:41:02.008874 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:41:02.008885 | orchestrator | 2026-02-04 00:41:02.008896 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-04 00:41:02.008907 | orchestrator | Wednesday 04 February 2026 00:40:58 +0000 (0:00:00.112) 0:00:23.139 **** 2026-02-04 00:41:02.008918 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:41:02.008929 | orchestrator | 2026-02-04 00:41:02.008940 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-04 00:41:02.008951 | orchestrator | Wednesday 04 February 2026 00:40:58 +0000 (0:00:00.117) 0:00:23.256 **** 2026-02-04 00:41:02.008981 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:41:02.008993 | orchestrator | 2026-02-04 00:41:02.009005 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-04 00:41:02.009015 | orchestrator | Wednesday 04 February 2026 00:40:59 +0000 (0:00:00.274) 0:00:23.531 **** 2026-02-04 
00:41:02.009026 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:41:02.009037 | orchestrator | 2026-02-04 00:41:02.009049 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-04 00:41:02.009059 | orchestrator | Wednesday 04 February 2026 00:40:59 +0000 (0:00:00.139) 0:00:23.670 **** 2026-02-04 00:41:02.009070 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:41:02.009082 | orchestrator | 2026-02-04 00:41:02.009092 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-04 00:41:02.009104 | orchestrator | Wednesday 04 February 2026 00:40:59 +0000 (0:00:00.142) 0:00:23.813 **** 2026-02-04 00:41:02.009115 | orchestrator | ok: [testbed-node-4] => { 2026-02-04 00:41:02.009126 | orchestrator |  "ceph_osd_devices": { 2026-02-04 00:41:02.009137 | orchestrator |  "sdb": { 2026-02-04 00:41:02.009148 | orchestrator |  "osd_lvm_uuid": "6fbd78c3-b583-5fde-80ba-0c2cdf325dc7" 2026-02-04 00:41:02.009160 | orchestrator |  }, 2026-02-04 00:41:02.009181 | orchestrator |  "sdc": { 2026-02-04 00:41:02.009192 | orchestrator |  "osd_lvm_uuid": "c6467dc2-49cb-511a-ae45-cb6bd8ce65cd" 2026-02-04 00:41:02.009203 | orchestrator |  } 2026-02-04 00:41:02.009214 | orchestrator |  } 2026-02-04 00:41:02.009226 | orchestrator | } 2026-02-04 00:41:02.009237 | orchestrator | 2026-02-04 00:41:02.009248 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-04 00:41:02.009259 | orchestrator | Wednesday 04 February 2026 00:40:59 +0000 (0:00:00.148) 0:00:23.962 **** 2026-02-04 00:41:02.009270 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:41:02.009281 | orchestrator | 2026-02-04 00:41:02.009292 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-04 00:41:02.009303 | orchestrator | Wednesday 04 February 2026 00:40:59 +0000 (0:00:00.115) 0:00:24.078 **** 2026-02-04 
00:41:02.009331 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:41:02.009342 | orchestrator | 2026-02-04 00:41:02.009353 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-04 00:41:02.009364 | orchestrator | Wednesday 04 February 2026 00:40:59 +0000 (0:00:00.119) 0:00:24.198 **** 2026-02-04 00:41:02.009375 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:41:02.009386 | orchestrator | 2026-02-04 00:41:02.009397 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-04 00:41:02.009408 | orchestrator | Wednesday 04 February 2026 00:40:59 +0000 (0:00:00.102) 0:00:24.300 **** 2026-02-04 00:41:02.009419 | orchestrator | changed: [testbed-node-4] => { 2026-02-04 00:41:02.009430 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-04 00:41:02.009442 | orchestrator |  "ceph_osd_devices": { 2026-02-04 00:41:02.009453 | orchestrator |  "sdb": { 2026-02-04 00:41:02.009464 | orchestrator |  "osd_lvm_uuid": "6fbd78c3-b583-5fde-80ba-0c2cdf325dc7" 2026-02-04 00:41:02.009475 | orchestrator |  }, 2026-02-04 00:41:02.009486 | orchestrator |  "sdc": { 2026-02-04 00:41:02.009497 | orchestrator |  "osd_lvm_uuid": "c6467dc2-49cb-511a-ae45-cb6bd8ce65cd" 2026-02-04 00:41:02.009508 | orchestrator |  } 2026-02-04 00:41:02.009519 | orchestrator |  }, 2026-02-04 00:41:02.009530 | orchestrator |  "lvm_volumes": [ 2026-02-04 00:41:02.009541 | orchestrator |  { 2026-02-04 00:41:02.009553 | orchestrator |  "data": "osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7", 2026-02-04 00:41:02.009564 | orchestrator |  "data_vg": "ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7" 2026-02-04 00:41:02.009575 | orchestrator |  }, 2026-02-04 00:41:02.009586 | orchestrator |  { 2026-02-04 00:41:02.009596 | orchestrator |  "data": "osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd", 2026-02-04 00:41:02.009608 | orchestrator |  "data_vg": "ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd" 2026-02-04 
00:41:02.009639 | orchestrator |  } 2026-02-04 00:41:02.009651 | orchestrator |  ] 2026-02-04 00:41:02.009662 | orchestrator |  } 2026-02-04 00:41:02.009674 | orchestrator | } 2026-02-04 00:41:02.009685 | orchestrator | 2026-02-04 00:41:02.009696 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-04 00:41:02.009707 | orchestrator | Wednesday 04 February 2026 00:41:00 +0000 (0:00:00.169) 0:00:24.470 **** 2026-02-04 00:41:02.009719 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-04 00:41:02.009730 | orchestrator | 2026-02-04 00:41:02.009741 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-04 00:41:02.009752 | orchestrator | 2026-02-04 00:41:02.009763 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-04 00:41:02.009774 | orchestrator | Wednesday 04 February 2026 00:41:00 +0000 (0:00:00.899) 0:00:25.369 **** 2026-02-04 00:41:02.009785 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-04 00:41:02.009797 | orchestrator | 2026-02-04 00:41:02.009808 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-04 00:41:02.009826 | orchestrator | Wednesday 04 February 2026 00:41:01 +0000 (0:00:00.481) 0:00:25.850 **** 2026-02-04 00:41:02.009837 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:41:02.009849 | orchestrator | 2026-02-04 00:41:02.009860 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:41:02.009871 | orchestrator | Wednesday 04 February 2026 00:41:01 +0000 (0:00:00.230) 0:00:26.081 **** 2026-02-04 00:41:02.009882 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-04 00:41:02.009893 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => 
(item=loop1) 2026-02-04 00:41:02.009910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-04 00:41:02.009921 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-04 00:41:02.009932 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-04 00:41:02.009950 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-04 00:41:08.637670 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-04 00:41:08.637778 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-04 00:41:08.637794 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-04 00:41:08.637806 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-04 00:41:08.637817 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-04 00:41:08.637829 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-04 00:41:08.637840 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-04 00:41:08.637851 | orchestrator | 2026-02-04 00:41:08.637863 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:41:08.637888 | orchestrator | Wednesday 04 February 2026 00:41:02 +0000 (0:00:00.332) 0:00:26.413 **** 2026-02-04 00:41:08.637900 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:08.637912 | orchestrator | 2026-02-04 00:41:08.637924 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:41:08.637935 | orchestrator | Wednesday 04 February 2026 00:41:02 
+0000 (0:00:00.177) 0:00:26.591 **** 2026-02-04 00:41:08.637946 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:08.637957 | orchestrator | 2026-02-04 00:41:08.637968 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:41:08.637979 | orchestrator | Wednesday 04 February 2026 00:41:02 +0000 (0:00:00.172) 0:00:26.763 **** 2026-02-04 00:41:08.637990 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:08.638001 | orchestrator | 2026-02-04 00:41:08.638012 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:41:08.638082 | orchestrator | Wednesday 04 February 2026 00:41:02 +0000 (0:00:00.169) 0:00:26.933 **** 2026-02-04 00:41:08.638094 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:08.638105 | orchestrator | 2026-02-04 00:41:08.638116 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:41:08.638127 | orchestrator | Wednesday 04 February 2026 00:41:02 +0000 (0:00:00.137) 0:00:27.070 **** 2026-02-04 00:41:08.638138 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:08.638149 | orchestrator | 2026-02-04 00:41:08.638163 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:41:08.638176 | orchestrator | Wednesday 04 February 2026 00:41:02 +0000 (0:00:00.144) 0:00:27.215 **** 2026-02-04 00:41:08.638189 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:08.638202 | orchestrator | 2026-02-04 00:41:08.638216 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:41:08.638229 | orchestrator | Wednesday 04 February 2026 00:41:02 +0000 (0:00:00.148) 0:00:27.364 **** 2026-02-04 00:41:08.638265 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:08.638279 | orchestrator | 2026-02-04 00:41:08.638292 | orchestrator | TASK [Add known 
links to the list of available block devices] ****************** 2026-02-04 00:41:08.638305 | orchestrator | Wednesday 04 February 2026 00:41:03 +0000 (0:00:00.145) 0:00:27.509 **** 2026-02-04 00:41:08.638318 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:08.638331 | orchestrator | 2026-02-04 00:41:08.638344 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:41:08.638358 | orchestrator | Wednesday 04 February 2026 00:41:03 +0000 (0:00:00.159) 0:00:27.669 **** 2026-02-04 00:41:08.638371 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3) 2026-02-04 00:41:08.638386 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3) 2026-02-04 00:41:08.638399 | orchestrator | 2026-02-04 00:41:08.638412 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:41:08.638425 | orchestrator | Wednesday 04 February 2026 00:41:03 +0000 (0:00:00.607) 0:00:28.276 **** 2026-02-04 00:41:08.638438 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_330cb526-2149-4826-b513-02c8e88ca89e) 2026-02-04 00:41:08.638451 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_330cb526-2149-4826-b513-02c8e88ca89e) 2026-02-04 00:41:08.638464 | orchestrator | 2026-02-04 00:41:08.638477 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:41:08.638490 | orchestrator | Wednesday 04 February 2026 00:41:04 +0000 (0:00:00.309) 0:00:28.586 **** 2026-02-04 00:41:08.638503 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e6547550-6f0e-4316-b715-af657c75c64a) 2026-02-04 00:41:08.638516 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e6547550-6f0e-4316-b715-af657c75c64a) 2026-02-04 00:41:08.638527 | orchestrator | 2026-02-04 
00:41:08.638538 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:41:08.638549 | orchestrator | Wednesday 04 February 2026 00:41:04 +0000 (0:00:00.342) 0:00:28.929 **** 2026-02-04 00:41:08.638560 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6b2cce40-d718-4f99-a243-3b703c717e59) 2026-02-04 00:41:08.638571 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6b2cce40-d718-4f99-a243-3b703c717e59) 2026-02-04 00:41:08.638582 | orchestrator | 2026-02-04 00:41:08.638593 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:41:08.638604 | orchestrator | Wednesday 04 February 2026 00:41:04 +0000 (0:00:00.361) 0:00:29.290 **** 2026-02-04 00:41:08.638636 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-04 00:41:08.638647 | orchestrator | 2026-02-04 00:41:08.638658 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:41:08.638687 | orchestrator | Wednesday 04 February 2026 00:41:05 +0000 (0:00:00.283) 0:00:29.574 **** 2026-02-04 00:41:08.638698 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-04 00:41:08.638709 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-04 00:41:08.638724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-04 00:41:08.638742 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-04 00:41:08.638759 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-04 00:41:08.638778 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-04 00:41:08.638796 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-04 00:41:08.638815 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-04 00:41:08.638847 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-04 00:41:08.638861 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-04 00:41:08.638872 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-02-04 00:41:08.638901 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-04 00:41:08.638913 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-04 00:41:08.638924 | orchestrator | 2026-02-04 00:41:08.638935 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:41:08.638946 | orchestrator | Wednesday 04 February 2026 00:41:05 +0000 (0:00:00.345) 0:00:29.919 **** 2026-02-04 00:41:08.638957 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:08.638967 | orchestrator | 2026-02-04 00:41:08.638978 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:41:08.638989 | orchestrator | Wednesday 04 February 2026 00:41:05 +0000 (0:00:00.169) 0:00:30.089 **** 2026-02-04 00:41:08.639000 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:08.639010 | orchestrator | 2026-02-04 00:41:08.639021 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:41:08.639032 | orchestrator | Wednesday 04 February 2026 00:41:05 +0000 (0:00:00.163) 0:00:30.252 **** 2026-02-04 00:41:08.639048 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:08.639059 | orchestrator | 
2026-02-04 00:41:08.639070 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:41:08.639081 | orchestrator | Wednesday 04 February 2026 00:41:06 +0000 (0:00:00.191) 0:00:30.444 **** 2026-02-04 00:41:08.639092 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:08.639103 | orchestrator | 2026-02-04 00:41:08.639114 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:41:08.639125 | orchestrator | Wednesday 04 February 2026 00:41:06 +0000 (0:00:00.177) 0:00:30.621 **** 2026-02-04 00:41:08.639136 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:08.639147 | orchestrator | 2026-02-04 00:41:08.639158 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:41:08.639168 | orchestrator | Wednesday 04 February 2026 00:41:06 +0000 (0:00:00.228) 0:00:30.850 **** 2026-02-04 00:41:08.639179 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:08.639190 | orchestrator | 2026-02-04 00:41:08.639201 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:41:08.639212 | orchestrator | Wednesday 04 February 2026 00:41:06 +0000 (0:00:00.521) 0:00:31.371 **** 2026-02-04 00:41:08.639223 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:08.639233 | orchestrator | 2026-02-04 00:41:08.639244 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:41:08.639255 | orchestrator | Wednesday 04 February 2026 00:41:07 +0000 (0:00:00.198) 0:00:31.570 **** 2026-02-04 00:41:08.639266 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:08.639277 | orchestrator | 2026-02-04 00:41:08.639288 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:41:08.639299 | orchestrator | Wednesday 04 February 2026 00:41:07 +0000 
(0:00:00.167) 0:00:31.737 **** 2026-02-04 00:41:08.639310 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-04 00:41:08.639321 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-02-04 00:41:08.639332 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-04 00:41:08.639343 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-04 00:41:08.639354 | orchestrator | 2026-02-04 00:41:08.639365 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:41:08.639376 | orchestrator | Wednesday 04 February 2026 00:41:07 +0000 (0:00:00.583) 0:00:32.320 **** 2026-02-04 00:41:08.639387 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:08.639398 | orchestrator | 2026-02-04 00:41:08.639416 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:41:08.639427 | orchestrator | Wednesday 04 February 2026 00:41:08 +0000 (0:00:00.186) 0:00:32.506 **** 2026-02-04 00:41:08.639438 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:08.639449 | orchestrator | 2026-02-04 00:41:08.639460 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:41:08.639471 | orchestrator | Wednesday 04 February 2026 00:41:08 +0000 (0:00:00.175) 0:00:32.682 **** 2026-02-04 00:41:08.639481 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:08.639492 | orchestrator | 2026-02-04 00:41:08.639503 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:41:08.639514 | orchestrator | Wednesday 04 February 2026 00:41:08 +0000 (0:00:00.195) 0:00:32.877 **** 2026-02-04 00:41:08.639525 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:08.639536 | orchestrator | 2026-02-04 00:41:08.639562 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-04 00:41:12.248554 | orchestrator | 
Wednesday 04 February 2026 00:41:08 +0000 (0:00:00.165) 0:00:33.043 **** 2026-02-04 00:41:12.248724 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-02-04 00:41:12.248741 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-02-04 00:41:12.248754 | orchestrator | 2026-02-04 00:41:12.248767 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-04 00:41:12.248779 | orchestrator | Wednesday 04 February 2026 00:41:08 +0000 (0:00:00.155) 0:00:33.198 **** 2026-02-04 00:41:12.248791 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:12.248803 | orchestrator | 2026-02-04 00:41:12.248815 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-04 00:41:12.248826 | orchestrator | Wednesday 04 February 2026 00:41:08 +0000 (0:00:00.102) 0:00:33.300 **** 2026-02-04 00:41:12.248838 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:12.248850 | orchestrator | 2026-02-04 00:41:12.248861 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-04 00:41:12.248872 | orchestrator | Wednesday 04 February 2026 00:41:09 +0000 (0:00:00.117) 0:00:33.417 **** 2026-02-04 00:41:12.248884 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:12.248895 | orchestrator | 2026-02-04 00:41:12.248907 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-04 00:41:12.248918 | orchestrator | Wednesday 04 February 2026 00:41:09 +0000 (0:00:00.304) 0:00:33.722 **** 2026-02-04 00:41:12.248930 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:41:12.248942 | orchestrator | 2026-02-04 00:41:12.248954 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-04 00:41:12.248967 | orchestrator | Wednesday 04 February 2026 00:41:09 +0000 (0:00:00.113) 0:00:33.836 **** 
2026-02-04 00:41:12.248979 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '81b3d681-fa24-5b92-b5b8-11e84f5b22d9'}}) 2026-02-04 00:41:12.248992 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5659fb6c-b6d6-5368-9f3c-0e525a1333df'}}) 2026-02-04 00:41:12.249003 | orchestrator | 2026-02-04 00:41:12.249015 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-04 00:41:12.249026 | orchestrator | Wednesday 04 February 2026 00:41:09 +0000 (0:00:00.138) 0:00:33.975 **** 2026-02-04 00:41:12.249038 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '81b3d681-fa24-5b92-b5b8-11e84f5b22d9'}})  2026-02-04 00:41:12.249052 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5659fb6c-b6d6-5368-9f3c-0e525a1333df'}})  2026-02-04 00:41:12.249064 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:12.249075 | orchestrator | 2026-02-04 00:41:12.249087 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-04 00:41:12.249099 | orchestrator | Wednesday 04 February 2026 00:41:09 +0000 (0:00:00.144) 0:00:34.119 **** 2026-02-04 00:41:12.249110 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '81b3d681-fa24-5b92-b5b8-11e84f5b22d9'}})  2026-02-04 00:41:12.249146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5659fb6c-b6d6-5368-9f3c-0e525a1333df'}})  2026-02-04 00:41:12.249159 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:12.249170 | orchestrator | 2026-02-04 00:41:12.249182 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-04 00:41:12.249194 | orchestrator | Wednesday 04 February 2026 00:41:09 +0000 (0:00:00.122) 0:00:34.242 **** 2026-02-04 00:41:12.249205 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '81b3d681-fa24-5b92-b5b8-11e84f5b22d9'}})  2026-02-04 00:41:12.249217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5659fb6c-b6d6-5368-9f3c-0e525a1333df'}})  2026-02-04 00:41:12.249229 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:12.249241 | orchestrator | 2026-02-04 00:41:12.249252 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-04 00:41:12.249264 | orchestrator | Wednesday 04 February 2026 00:41:09 +0000 (0:00:00.133) 0:00:34.376 **** 2026-02-04 00:41:12.249275 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:41:12.249287 | orchestrator | 2026-02-04 00:41:12.249298 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-04 00:41:12.249310 | orchestrator | Wednesday 04 February 2026 00:41:10 +0000 (0:00:00.130) 0:00:34.507 **** 2026-02-04 00:41:12.249321 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:41:12.249333 | orchestrator | 2026-02-04 00:41:12.249363 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-04 00:41:12.249374 | orchestrator | Wednesday 04 February 2026 00:41:10 +0000 (0:00:00.115) 0:00:34.622 **** 2026-02-04 00:41:12.249385 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:12.249397 | orchestrator | 2026-02-04 00:41:12.249408 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-04 00:41:12.249419 | orchestrator | Wednesday 04 February 2026 00:41:10 +0000 (0:00:00.105) 0:00:34.727 **** 2026-02-04 00:41:12.249514 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:12.249528 | orchestrator | 2026-02-04 00:41:12.249540 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-04 00:41:12.249551 | orchestrator | 
Wednesday 04 February 2026 00:41:10 +0000 (0:00:00.110) 0:00:34.838 **** 2026-02-04 00:41:12.249562 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:12.249573 | orchestrator | 2026-02-04 00:41:12.249584 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-04 00:41:12.249595 | orchestrator | Wednesday 04 February 2026 00:41:10 +0000 (0:00:00.134) 0:00:34.972 **** 2026-02-04 00:41:12.249607 | orchestrator | ok: [testbed-node-5] => { 2026-02-04 00:41:12.249639 | orchestrator |  "ceph_osd_devices": { 2026-02-04 00:41:12.249650 | orchestrator |  "sdb": { 2026-02-04 00:41:12.249682 | orchestrator |  "osd_lvm_uuid": "81b3d681-fa24-5b92-b5b8-11e84f5b22d9" 2026-02-04 00:41:12.249694 | orchestrator |  }, 2026-02-04 00:41:12.249705 | orchestrator |  "sdc": { 2026-02-04 00:41:12.249717 | orchestrator |  "osd_lvm_uuid": "5659fb6c-b6d6-5368-9f3c-0e525a1333df" 2026-02-04 00:41:12.249728 | orchestrator |  } 2026-02-04 00:41:12.249739 | orchestrator |  } 2026-02-04 00:41:12.249751 | orchestrator | } 2026-02-04 00:41:12.249762 | orchestrator | 2026-02-04 00:41:12.249773 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-04 00:41:12.249785 | orchestrator | Wednesday 04 February 2026 00:41:10 +0000 (0:00:00.138) 0:00:35.111 **** 2026-02-04 00:41:12.249796 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:12.249807 | orchestrator | 2026-02-04 00:41:12.249818 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-04 00:41:12.249829 | orchestrator | Wednesday 04 February 2026 00:41:10 +0000 (0:00:00.247) 0:00:35.358 **** 2026-02-04 00:41:12.249840 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:12.249863 | orchestrator | 2026-02-04 00:41:12.249875 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-04 00:41:12.249886 | orchestrator | Wednesday 
04 February 2026 00:41:11 +0000 (0:00:00.108) 0:00:35.467 **** 2026-02-04 00:41:12.249896 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:41:12.249907 | orchestrator | 2026-02-04 00:41:12.249918 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-04 00:41:12.249929 | orchestrator | Wednesday 04 February 2026 00:41:11 +0000 (0:00:00.134) 0:00:35.601 **** 2026-02-04 00:41:12.249940 | orchestrator | changed: [testbed-node-5] => { 2026-02-04 00:41:12.249952 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-04 00:41:12.249963 | orchestrator |  "ceph_osd_devices": { 2026-02-04 00:41:12.249974 | orchestrator |  "sdb": { 2026-02-04 00:41:12.249985 | orchestrator |  "osd_lvm_uuid": "81b3d681-fa24-5b92-b5b8-11e84f5b22d9" 2026-02-04 00:41:12.249996 | orchestrator |  }, 2026-02-04 00:41:12.250008 | orchestrator |  "sdc": { 2026-02-04 00:41:12.250088 | orchestrator |  "osd_lvm_uuid": "5659fb6c-b6d6-5368-9f3c-0e525a1333df" 2026-02-04 00:41:12.250100 | orchestrator |  } 2026-02-04 00:41:12.250111 | orchestrator |  }, 2026-02-04 00:41:12.250123 | orchestrator |  "lvm_volumes": [ 2026-02-04 00:41:12.250133 | orchestrator |  { 2026-02-04 00:41:12.250144 | orchestrator |  "data": "osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9", 2026-02-04 00:41:12.250155 | orchestrator |  "data_vg": "ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9" 2026-02-04 00:41:12.250166 | orchestrator |  }, 2026-02-04 00:41:12.250178 | orchestrator |  { 2026-02-04 00:41:12.250189 | orchestrator |  "data": "osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df", 2026-02-04 00:41:12.250207 | orchestrator |  "data_vg": "ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df" 2026-02-04 00:41:12.250219 | orchestrator |  } 2026-02-04 00:41:12.250230 | orchestrator |  ] 2026-02-04 00:41:12.250246 | orchestrator |  } 2026-02-04 00:41:12.250258 | orchestrator | } 2026-02-04 00:41:12.250269 | orchestrator | 2026-02-04 00:41:12.250280 | orchestrator | RUNNING HANDLER 
[Write configuration file] ************************************* 2026-02-04 00:41:12.250291 | orchestrator | Wednesday 04 February 2026 00:41:11 +0000 (0:00:00.199) 0:00:35.801 **** 2026-02-04 00:41:12.250302 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-04 00:41:12.250313 | orchestrator | 2026-02-04 00:41:12.250324 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:41:12.250335 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-04 00:41:12.250348 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-04 00:41:12.250359 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-04 00:41:12.250370 | orchestrator | 2026-02-04 00:41:12.250381 | orchestrator | 2026-02-04 00:41:12.250392 | orchestrator | 2026-02-04 00:41:12.250403 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:41:12.250414 | orchestrator | Wednesday 04 February 2026 00:41:12 +0000 (0:00:00.848) 0:00:36.650 **** 2026-02-04 00:41:12.250425 | orchestrator | =============================================================================== 2026-02-04 00:41:12.250436 | orchestrator | Write configuration file ------------------------------------------------ 3.66s 2026-02-04 00:41:12.250447 | orchestrator | Add known links to the list of available block devices ------------------ 1.22s 2026-02-04 00:41:12.250457 | orchestrator | Add known partitions to the list of available block devices ------------- 1.02s 2026-02-04 00:41:12.250468 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.95s 2026-02-04 00:41:12.250486 | orchestrator | Add known partitions to the list of available block devices ------------- 0.91s 2026-02-04 
00:41:12.250497 | orchestrator | Print configuration data ------------------------------------------------ 0.72s 2026-02-04 00:41:12.250508 | orchestrator | Get initial list of available block devices ----------------------------- 0.67s 2026-02-04 00:41:12.250519 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.65s 2026-02-04 00:41:12.250530 | orchestrator | Add known links to the list of available block devices ------------------ 0.62s 2026-02-04 00:41:12.250541 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2026-02-04 00:41:12.250552 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.59s 2026-02-04 00:41:12.250562 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s 2026-02-04 00:41:12.250573 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s 2026-02-04 00:41:12.250593 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s 2026-02-04 00:41:12.473337 | orchestrator | Add known links to the list of available block devices ------------------ 0.53s 2026-02-04 00:41:12.473463 | orchestrator | Add known partitions to the list of available block devices ------------- 0.52s 2026-02-04 00:41:12.473486 | orchestrator | Add known links to the list of available block devices ------------------ 0.52s 2026-02-04 00:41:12.473505 | orchestrator | Add known partitions to the list of available block devices ------------- 0.51s 2026-02-04 00:41:12.473523 | orchestrator | Print WAL devices ------------------------------------------------------- 0.51s 2026-02-04 00:41:12.473540 | orchestrator | Set DB devices config data ---------------------------------------------- 0.51s 2026-02-04 00:41:34.722787 | orchestrator | 2026-02-04 00:41:34 | INFO  | Task c5540780-f72b-4381-be58-f6fff15f196b (sync inventory) is running in 
background. Output coming soon.
2026-02-04 00:41:57.344785 | orchestrator | 2026-02-04 00:41:36 | INFO  | Starting group_vars file reorganization
2026-02-04 00:41:57.344891 | orchestrator | 2026-02-04 00:41:36 | INFO  | Moved 0 file(s) to their respective directories
2026-02-04 00:41:57.344907 | orchestrator | 2026-02-04 00:41:36 | INFO  | Group_vars file reorganization completed
2026-02-04 00:41:57.344914 | orchestrator | 2026-02-04 00:41:38 | INFO  | Starting variable preparation from inventory
2026-02-04 00:41:57.344922 | orchestrator | 2026-02-04 00:41:41 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-02-04 00:41:57.344930 | orchestrator | 2026-02-04 00:41:41 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-02-04 00:41:57.344936 | orchestrator | 2026-02-04 00:41:41 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-02-04 00:41:57.344943 | orchestrator | 2026-02-04 00:41:41 | INFO  | 3 file(s) written, 6 host(s) processed
2026-02-04 00:41:57.344951 | orchestrator | 2026-02-04 00:41:41 | INFO  | Variable preparation completed
2026-02-04 00:41:57.344958 | orchestrator | 2026-02-04 00:41:42 | INFO  | Starting inventory overwrite handling
2026-02-04 00:41:57.344964 | orchestrator | 2026-02-04 00:41:42 | INFO  | Handling group overwrites in 99-overwrite
2026-02-04 00:41:57.344971 | orchestrator | 2026-02-04 00:41:42 | INFO  | Removing group frr:children from 60-generic
2026-02-04 00:41:57.344978 | orchestrator | 2026-02-04 00:41:42 | INFO  | Removing group netbird:children from 50-infrastructure
2026-02-04 00:41:57.345003 | orchestrator | 2026-02-04 00:41:42 | INFO  | Removing group ceph-rgw from 50-ceph
2026-02-04 00:41:57.345010 | orchestrator | 2026-02-04 00:41:42 | INFO  | Removing group ceph-mds from 50-ceph
2026-02-04 00:41:57.345017 | orchestrator | 2026-02-04 00:41:42 | INFO  | Handling group overwrites in 20-roles
2026-02-04 00:41:57.345023 | orchestrator | 2026-02-04 00:41:42 | INFO  | Removing group k3s_node from 50-infrastructure
2026-02-04 00:41:57.345053 | orchestrator | 2026-02-04 00:41:42 | INFO  | Removed 5 group(s) in total
2026-02-04 00:41:57.345059 | orchestrator | 2026-02-04 00:41:42 | INFO  | Inventory overwrite handling completed
2026-02-04 00:41:57.345066 | orchestrator | 2026-02-04 00:41:43 | INFO  | Starting merge of inventory files
2026-02-04 00:41:57.345073 | orchestrator | 2026-02-04 00:41:43 | INFO  | Inventory files merged successfully
2026-02-04 00:41:57.345079 | orchestrator | 2026-02-04 00:41:47 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-02-04 00:41:57.345086 | orchestrator | 2026-02-04 00:41:56 | INFO  | Successfully wrote ClusterShell configuration
2026-02-04 00:41:57.345092 | orchestrator | [master 4da6203] 2026-02-04-00-41
2026-02-04 00:41:57.345100 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-02-04 00:41:59.322317 | orchestrator | 2026-02-04 00:41:59 | INFO  | Task 504257c7-de9e-4e3d-baf4-203b15cead0f (ceph-create-lvm-devices) was prepared for execution.
2026-02-04 00:41:59.322417 | orchestrator | 2026-02-04 00:41:59 | INFO  | It takes a moment until task 504257c7-de9e-4e3d-baf4-203b15cead0f (ceph-create-lvm-devices) has been started and output is visible here.
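Aside for readers of this log: the `ceph-create-lvm-devices` play that follows loops over items such as `{'data': 'osd-block-<uuid>', 'data_vg': 'ceph-<uuid>'}`, which suggests the LVM names are derived from the `osd_lvm_uuid` of each entry in `ceph_osd_devices`. A minimal sketch of that apparent naming convention (the helper name and structure are illustrative assumptions, not the actual OSISM code):

```python
# Sketch (assumption): reproduce the VG/LV naming visible in the play's
# loop items, derived from ceph_osd_devices as shown in this log.
def lvm_names(ceph_osd_devices: dict) -> list[dict]:
    volumes = []
    for device, config in ceph_osd_devices.items():
        uuid = config["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",   # logical volume name
            "data_vg": f"ceph-{uuid}",     # volume group name
        })
    return volumes

# Values taken from the "Create dict of block VGs -> PVs" task below.
devices = {
    "sdb": {"osd_lvm_uuid": "29c6bc8c-f904-55ca-809f-6429b65a49e4"},
    "sdc": {"osd_lvm_uuid": "1b7fb365-e96c-53e1-a018-1a0a8a845031"},
}
for volume in lvm_names(devices):
    print(volume["data_vg"], volume["data"])
```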
2026-02-04 00:42:09.832421 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-04 00:42:09.832493 | orchestrator | 2.16.14
2026-02-04 00:42:09.832500 | orchestrator |
2026-02-04 00:42:09.832505 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-04 00:42:09.832510 | orchestrator |
2026-02-04 00:42:09.832514 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-04 00:42:09.832519 | orchestrator | Wednesday 04 February 2026 00:42:03 +0000 (0:00:00.294) 0:00:00.294 ****
2026-02-04 00:42:09.832524 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-04 00:42:09.832528 | orchestrator |
2026-02-04 00:42:09.832533 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-04 00:42:09.832537 | orchestrator | Wednesday 04 February 2026 00:42:03 +0000 (0:00:00.198) 0:00:00.511 ****
2026-02-04 00:42:09.832540 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:42:09.832545 | orchestrator |
2026-02-04 00:42:09.832549 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:09.832553 | orchestrator | Wednesday 04 February 2026 00:42:03 +0000 (0:00:00.198) 0:00:00.710 ****
2026-02-04 00:42:09.832557 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-04 00:42:09.832561 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-04 00:42:09.832565 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-04 00:42:09.832569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-04 00:42:09.832572 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-04 00:42:09.832576 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-04 00:42:09.832580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-04 00:42:09.832584 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-04 00:42:09.832588 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-04 00:42:09.832654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-04 00:42:09.832659 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-04 00:42:09.832663 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-04 00:42:09.832666 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-04 00:42:09.832686 | orchestrator |
2026-02-04 00:42:09.832691 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:09.832694 | orchestrator | Wednesday 04 February 2026 00:42:04 +0000 (0:00:00.415) 0:00:01.126 ****
2026-02-04 00:42:09.832698 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:09.832702 | orchestrator |
2026-02-04 00:42:09.832706 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:09.832710 | orchestrator | Wednesday 04 February 2026 00:42:04 +0000 (0:00:00.176) 0:00:01.302 ****
2026-02-04 00:42:09.832714 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:09.832718 | orchestrator |
2026-02-04 00:42:09.832722 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:09.832726 | orchestrator | Wednesday 04 February 2026 00:42:04 +0000 (0:00:00.178) 0:00:01.481 ****
2026-02-04 00:42:09.832730 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:09.832734 | orchestrator |
2026-02-04 00:42:09.832738 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:09.832742 | orchestrator | Wednesday 04 February 2026 00:42:04 +0000 (0:00:00.179) 0:00:01.660 ****
2026-02-04 00:42:09.832746 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:09.832749 | orchestrator |
2026-02-04 00:42:09.832754 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:09.832757 | orchestrator | Wednesday 04 February 2026 00:42:05 +0000 (0:00:00.183) 0:00:01.844 ****
2026-02-04 00:42:09.832761 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:09.832765 | orchestrator |
2026-02-04 00:42:09.832769 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:09.832773 | orchestrator | Wednesday 04 February 2026 00:42:05 +0000 (0:00:00.181) 0:00:02.025 ****
2026-02-04 00:42:09.832777 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:09.832781 | orchestrator |
2026-02-04 00:42:09.832784 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:09.832788 | orchestrator | Wednesday 04 February 2026 00:42:05 +0000 (0:00:00.179) 0:00:02.205 ****
2026-02-04 00:42:09.832792 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:09.832796 | orchestrator |
2026-02-04 00:42:09.832800 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:09.832804 | orchestrator | Wednesday 04 February 2026 00:42:05 +0000 (0:00:00.179) 0:00:02.385 ****
2026-02-04 00:42:09.832808 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:09.832812 | orchestrator |
2026-02-04 00:42:09.832815 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:09.832819 | orchestrator | Wednesday 04 February 2026 00:42:05 +0000 (0:00:00.180) 0:00:02.566 ****
2026-02-04 00:42:09.832823 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016)
2026-02-04 00:42:09.832829 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016)
2026-02-04 00:42:09.832833 | orchestrator |
2026-02-04 00:42:09.832837 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:09.832850 | orchestrator | Wednesday 04 February 2026 00:42:06 +0000 (0:00:00.365) 0:00:02.931 ****
2026-02-04 00:42:09.832854 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_1679d905-c182-4dcb-a16f-ff388fb87fa8)
2026-02-04 00:42:09.832858 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_1679d905-c182-4dcb-a16f-ff388fb87fa8)
2026-02-04 00:42:09.832862 | orchestrator |
2026-02-04 00:42:09.832866 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:09.832869 | orchestrator | Wednesday 04 February 2026 00:42:06 +0000 (0:00:00.544) 0:00:03.475 ****
2026-02-04 00:42:09.832873 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6b00b999-8e8e-4579-a93c-a7b8030012f4)
2026-02-04 00:42:09.832877 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6b00b999-8e8e-4579-a93c-a7b8030012f4)
2026-02-04 00:42:09.832885 | orchestrator |
2026-02-04 00:42:09.832888 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:09.832892 | orchestrator | Wednesday 04 February 2026 00:42:07 +0000 (0:00:00.543) 0:00:04.018 ****
2026-02-04 00:42:09.832896 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f279b9c8-b4a1-41c6-b00f-bd5a2c0b4c81)
2026-02-04 00:42:09.832900 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f279b9c8-b4a1-41c6-b00f-bd5a2c0b4c81)
2026-02-04 00:42:09.832904 | orchestrator |
2026-02-04 00:42:09.832908 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:09.832912 | orchestrator | Wednesday 04 February 2026 00:42:07 +0000 (0:00:00.676) 0:00:04.695 ****
2026-02-04 00:42:09.832916 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-04 00:42:09.832920 | orchestrator |
2026-02-04 00:42:09.832924 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:42:09.832928 | orchestrator | Wednesday 04 February 2026 00:42:08 +0000 (0:00:00.293) 0:00:04.988 ****
2026-02-04 00:42:09.832931 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-04 00:42:09.832935 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-04 00:42:09.832939 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-04 00:42:09.832943 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-04 00:42:09.832947 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-04 00:42:09.832951 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-04 00:42:09.832954 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-04 00:42:09.832958 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-04 00:42:09.832962 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-04 00:42:09.832966 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-04 00:42:09.832970 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-04 00:42:09.832988 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-04 00:42:09.832992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-04 00:42:09.832996 | orchestrator |
2026-02-04 00:42:09.833000 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:42:09.833005 | orchestrator | Wednesday 04 February 2026 00:42:08 +0000 (0:00:00.374) 0:00:05.363 ****
2026-02-04 00:42:09.833009 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:09.833014 | orchestrator |
2026-02-04 00:42:09.833018 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:42:09.833023 | orchestrator | Wednesday 04 February 2026 00:42:08 +0000 (0:00:00.173) 0:00:05.537 ****
2026-02-04 00:42:09.833027 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:09.833032 | orchestrator |
2026-02-04 00:42:09.833036 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:42:09.833041 | orchestrator | Wednesday 04 February 2026 00:42:09 +0000 (0:00:00.194) 0:00:05.731 ****
2026-02-04 00:42:09.833045 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:09.833050 | orchestrator |
2026-02-04 00:42:09.833054 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:42:09.833059 | orchestrator | Wednesday 04 February 2026 00:42:09 +0000 (0:00:00.164) 0:00:05.896 ****
2026-02-04 00:42:09.833063 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:09.833070 | orchestrator |
2026-02-04 00:42:09.833075 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:42:09.833080 | orchestrator | Wednesday 04 February 2026 00:42:09 +0000 (0:00:00.156) 0:00:06.052 ****
2026-02-04 00:42:09.833084 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:09.833089 | orchestrator |
2026-02-04 00:42:09.833093 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:42:09.833098 | orchestrator | Wednesday 04 February 2026 00:42:09 +0000 (0:00:00.172) 0:00:06.225 ****
2026-02-04 00:42:09.833102 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:09.833107 | orchestrator |
2026-02-04 00:42:09.833111 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:42:09.833116 | orchestrator | Wednesday 04 February 2026 00:42:09 +0000 (0:00:00.159) 0:00:06.384 ****
2026-02-04 00:42:09.833120 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:09.833125 | orchestrator |
2026-02-04 00:42:09.833132 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:42:16.864908 | orchestrator | Wednesday 04 February 2026 00:42:09 +0000 (0:00:00.173) 0:00:06.557 ****
2026-02-04 00:42:16.865017 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:16.865035 | orchestrator |
2026-02-04 00:42:16.865048 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:42:16.865061 | orchestrator | Wednesday 04 February 2026 00:42:09 +0000 (0:00:00.169) 0:00:06.727 ****
2026-02-04 00:42:16.865072 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-04 00:42:16.865085 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-04 00:42:16.865097 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-04 00:42:16.865109 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-04 00:42:16.865120 | orchestrator |
2026-02-04 00:42:16.865131 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:42:16.865143 | orchestrator | Wednesday 04 February 2026 00:42:10 +0000 (0:00:00.842) 0:00:07.569 ****
2026-02-04 00:42:16.865155 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:16.865166 | orchestrator |
2026-02-04 00:42:16.865177 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:42:16.865189 | orchestrator | Wednesday 04 February 2026 00:42:11 +0000 (0:00:00.164) 0:00:07.734 ****
2026-02-04 00:42:16.865200 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:16.865211 | orchestrator |
2026-02-04 00:42:16.865223 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:42:16.865234 | orchestrator | Wednesday 04 February 2026 00:42:11 +0000 (0:00:00.162) 0:00:07.896 ****
2026-02-04 00:42:16.865246 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:16.865258 | orchestrator |
2026-02-04 00:42:16.865269 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:42:16.865280 | orchestrator | Wednesday 04 February 2026 00:42:11 +0000 (0:00:00.174) 0:00:08.070 ****
2026-02-04 00:42:16.865291 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:16.865302 | orchestrator |
2026-02-04 00:42:16.865313 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-04 00:42:16.865324 | orchestrator | Wednesday 04 February 2026 00:42:11 +0000 (0:00:00.160) 0:00:08.230 ****
2026-02-04 00:42:16.865336 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:16.865347 | orchestrator |
2026-02-04 00:42:16.865358 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-04 00:42:16.865369 | orchestrator | Wednesday 04 February 2026 00:42:11 +0000 (0:00:00.100) 0:00:08.330 ****
2026-02-04 00:42:16.865381 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '29c6bc8c-f904-55ca-809f-6429b65a49e4'}})
2026-02-04 00:42:16.865393 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1b7fb365-e96c-53e1-a018-1a0a8a845031'}})
2026-02-04 00:42:16.865404 | orchestrator |
2026-02-04 00:42:16.865416 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-04 00:42:16.865452 | orchestrator | Wednesday 04 February 2026 00:42:11 +0000 (0:00:00.135) 0:00:08.466 ****
2026-02-04 00:42:16.865466 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'data_vg': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'})
2026-02-04 00:42:16.865480 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'data_vg': 'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'})
2026-02-04 00:42:16.865493 | orchestrator |
2026-02-04 00:42:16.865507 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-04 00:42:16.865534 | orchestrator | Wednesday 04 February 2026 00:42:13 +0000 (0:00:01.918) 0:00:10.385 ****
2026-02-04 00:42:16.865548 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'data_vg': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'})
2026-02-04 00:42:16.865563 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'data_vg': 'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'})
2026-02-04 00:42:16.865576 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:16.865616 | orchestrator |
2026-02-04 00:42:16.865631 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-04 00:42:16.865644 | orchestrator | Wednesday 04 February 2026 00:42:13 +0000 (0:00:00.155) 0:00:10.540 ****
2026-02-04 00:42:16.865657 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'data_vg': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'})
2026-02-04 00:42:16.865670 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'data_vg': 'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'})
2026-02-04 00:42:16.865683 | orchestrator |
2026-02-04 00:42:16.865695 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-04 00:42:16.865708 | orchestrator | Wednesday 04 February 2026 00:42:15 +0000 (0:00:01.420) 0:00:11.961 ****
2026-02-04 00:42:16.865722 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'data_vg': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'})
2026-02-04 00:42:16.865734 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'data_vg': 'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'})
2026-02-04 00:42:16.865748 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:16.865761 | orchestrator |
2026-02-04 00:42:16.865774 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-04 00:42:16.865786 | orchestrator | Wednesday 04 February 2026 00:42:15 +0000 (0:00:00.139) 0:00:12.101 ****
2026-02-04 00:42:16.865817 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:16.865831 | orchestrator |
2026-02-04 00:42:16.865844 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-04 00:42:16.865856 | orchestrator | Wednesday 04 February 2026 00:42:15 +0000 (0:00:00.136) 0:00:12.238 ****
2026-02-04 00:42:16.865867 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'data_vg': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'})
2026-02-04 00:42:16.865878 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'data_vg': 'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'})
2026-02-04 00:42:16.865889 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:16.865900 | orchestrator |
2026-02-04 00:42:16.865911 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-04 00:42:16.865922 | orchestrator | Wednesday 04 February 2026 00:42:15 +0000 (0:00:00.259) 0:00:12.498 ****
2026-02-04 00:42:16.865933 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:16.865944 | orchestrator |
2026-02-04 00:42:16.865955 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-04 00:42:16.865966 | orchestrator | Wednesday 04 February 2026 00:42:15 +0000 (0:00:00.131) 0:00:12.630 ****
2026-02-04 00:42:16.865985 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'data_vg': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'})
2026-02-04 00:42:16.865996 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'data_vg': 'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'})
2026-02-04 00:42:16.866008 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:16.866073 | orchestrator |
2026-02-04 00:42:16.866086 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-04 00:42:16.866097 | orchestrator | Wednesday 04 February 2026 00:42:16 +0000 (0:00:00.129) 0:00:12.759 ****
2026-02-04 00:42:16.866108 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:16.866119 | orchestrator |
2026-02-04 00:42:16.866130 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-04 00:42:16.866142 | orchestrator | Wednesday 04 February 2026 00:42:16 +0000 (0:00:00.097) 0:00:12.856 ****
2026-02-04 00:42:16.866153 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'data_vg': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'})
2026-02-04 00:42:16.866164 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'data_vg': 'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'})
2026-02-04 00:42:16.866175 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:16.866186 | orchestrator |
2026-02-04 00:42:16.866197 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-04 00:42:16.866208 | orchestrator | Wednesday 04 February 2026 00:42:16 +0000 (0:00:00.129) 0:00:12.986 ****
2026-02-04 00:42:16.866219 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:42:16.866230 | orchestrator |
2026-02-04 00:42:16.866242 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-04 00:42:16.866253 | orchestrator | Wednesday 04 February 2026 00:42:16 +0000 (0:00:00.108) 0:00:13.094 ****
2026-02-04 00:42:16.866264 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'data_vg': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'})
2026-02-04 00:42:16.866275 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'data_vg': 'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'})
2026-02-04 00:42:16.866287 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:16.866298 | orchestrator |
2026-02-04 00:42:16.866309 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-04 00:42:16.866320 | orchestrator | Wednesday 04 February 2026 00:42:16 +0000 (0:00:00.141) 0:00:13.236 ****
2026-02-04 00:42:16.866331 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'data_vg': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'})
2026-02-04 00:42:16.866350 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'data_vg': 'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'})
2026-02-04 00:42:16.866362 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:16.866373 | orchestrator |
2026-02-04 00:42:16.866384 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-04 00:42:16.866396 | orchestrator | Wednesday 04 February 2026 00:42:16 +0000 (0:00:00.109) 0:00:13.345 ****
2026-02-04 00:42:16.866407 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'data_vg': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'})
2026-02-04 00:42:16.866419 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'data_vg': 'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'})
2026-02-04 00:42:16.866430 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:16.866441 | orchestrator |
2026-02-04 00:42:16.866452 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-04 00:42:16.866463 | orchestrator | Wednesday 04 February 2026 00:42:16 +0000 (0:00:00.124) 0:00:13.470 ****
2026-02-04 00:42:16.866485 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:16.866497 | orchestrator |
2026-02-04 00:42:16.866508 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-04 00:42:16.866526 | orchestrator | Wednesday 04 February 2026 00:42:16 +0000 (0:00:00.120) 0:00:13.591 ****
2026-02-04 00:42:22.521427 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:22.521537 | orchestrator |
2026-02-04 00:42:22.521556 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-04 00:42:22.521569 | orchestrator | Wednesday 04 February 2026 00:42:16 +0000 (0:00:00.119) 0:00:13.710 ****
2026-02-04 00:42:22.521580 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:22.521675 | orchestrator |
2026-02-04 00:42:22.521695 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-04 00:42:22.521707 | orchestrator | Wednesday 04 February 2026 00:42:17 +0000 (0:00:00.127) 0:00:13.838 ****
2026-02-04 00:42:22.521718 | orchestrator | ok: [testbed-node-3] => {
2026-02-04 00:42:22.521731 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-02-04 00:42:22.521742 | orchestrator | }
2026-02-04 00:42:22.521754 | orchestrator |
2026-02-04 00:42:22.521765 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-04 00:42:22.521777 | orchestrator | Wednesday 04 February 2026 00:42:17 +0000 (0:00:00.239) 0:00:14.078 ****
2026-02-04 00:42:22.521789 | orchestrator | ok: [testbed-node-3] => {
2026-02-04 00:42:22.521800 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-02-04 00:42:22.521812 | orchestrator | }
2026-02-04 00:42:22.521823 | orchestrator |
2026-02-04 00:42:22.521834 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-04 00:42:22.521845 | orchestrator | Wednesday 04 February 2026 00:42:17 +0000 (0:00:00.137) 0:00:14.215 ****
2026-02-04 00:42:22.521856 | orchestrator | ok: [testbed-node-3] => {
2026-02-04 00:42:22.521869 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-02-04 00:42:22.521881 | orchestrator | }
2026-02-04 00:42:22.521892 | orchestrator |
2026-02-04 00:42:22.521903 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-04 00:42:22.521915 | orchestrator | Wednesday 04 February 2026 00:42:17 +0000 (0:00:00.124) 0:00:14.340 ****
2026-02-04 00:42:22.521926 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:42:22.521937 | orchestrator |
2026-02-04 00:42:22.521949 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-04 00:42:22.521960 | orchestrator | Wednesday 04 February 2026 00:42:18 +0000 (0:00:00.617) 0:00:14.958 ****
2026-02-04 00:42:22.521974 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:42:22.521988 | orchestrator |
2026-02-04 00:42:22.522001 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-04 00:42:22.522062 | orchestrator | Wednesday 04 February 2026 00:42:18 +0000 (0:00:00.498) 0:00:15.457 ****
2026-02-04 00:42:22.522076 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:42:22.522089 | orchestrator |
2026-02-04 00:42:22.522101 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-04 00:42:22.522115 | orchestrator | Wednesday 04 February 2026 00:42:19 +0000 (0:00:00.488) 0:00:15.946 ****
2026-02-04 00:42:22.522127 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:42:22.522139 | orchestrator |
2026-02-04 00:42:22.522153 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-04 00:42:22.522166 | orchestrator | Wednesday 04 February 2026 00:42:19 +0000 (0:00:00.137) 0:00:16.083 ****
2026-02-04 00:42:22.522180 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:22.522193 | orchestrator |
2026-02-04 00:42:22.522204 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-04 00:42:22.522215 | orchestrator | Wednesday 04 February 2026 00:42:19 +0000 (0:00:00.105) 0:00:16.189 ****
2026-02-04 00:42:22.522227 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:22.522238 | orchestrator |
2026-02-04 00:42:22.522248 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-04 00:42:22.522288 | orchestrator | Wednesday 04 February 2026 00:42:19 +0000 (0:00:00.082) 0:00:16.272 ****
2026-02-04 00:42:22.522314 | orchestrator | ok: [testbed-node-3] => {
2026-02-04 00:42:22.522326 | orchestrator |     "vgs_report": {
2026-02-04 00:42:22.522337 | orchestrator |         "vg": []
2026-02-04 00:42:22.522348 | orchestrator |     }
2026-02-04 00:42:22.522360 | orchestrator | }
2026-02-04 00:42:22.522370 | orchestrator |
2026-02-04 00:42:22.522382 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-04 00:42:22.522392 | orchestrator | Wednesday 04 February 2026 00:42:19 +0000 (0:00:00.127) 0:00:16.399 ****
2026-02-04 00:42:22.522403 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:22.522414 | orchestrator |
2026-02-04 00:42:22.522425 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-04 00:42:22.522436 | orchestrator | Wednesday 04 February 2026 00:42:19 +0000 (0:00:00.124) 0:00:16.523 ****
2026-02-04 00:42:22.522447 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:22.522458 | orchestrator |
2026-02-04 00:42:22.522468 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-04 00:42:22.522479 | orchestrator | Wednesday 04 February 2026 00:42:19 +0000 (0:00:00.135) 0:00:16.658 ****
2026-02-04 00:42:22.522490 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:22.522500 | orchestrator |
2026-02-04 00:42:22.522511 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-04 00:42:22.522522 | orchestrator | Wednesday 04 February 2026 00:42:20 +0000 (0:00:00.229) 0:00:16.888 ****
2026-02-04 00:42:22.522533 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:22.522544 | orchestrator |
2026-02-04 00:42:22.522554 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-04 00:42:22.522565 | orchestrator | Wednesday 04 February 2026 00:42:20 +0000 (0:00:00.137) 0:00:17.026 ****
2026-02-04 00:42:22.522576 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:22.522610 | orchestrator |
2026-02-04 00:42:22.522623 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-04 00:42:22.522634 | orchestrator | Wednesday 04 February 2026 00:42:20 +0000 (0:00:00.114) 0:00:17.140 ****
2026-02-04 00:42:22.522645 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:22.522656 | orchestrator |
2026-02-04 00:42:22.522667 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-04 00:42:22.522678 | orchestrator | Wednesday 04 February 2026 00:42:20 +0000 (0:00:00.145) 0:00:17.285 ****
2026-02-04 00:42:22.522689 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:22.522700 | orchestrator |
2026-02-04 00:42:22.522711 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-04 00:42:22.522722 | orchestrator | Wednesday 04 February 2026 00:42:20 +0000 (0:00:00.135) 0:00:17.421 ****
2026-02-04 00:42:22.522752 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:22.522763 | orchestrator |
2026-02-04 00:42:22.522775 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-04 00:42:22.522786 | orchestrator | Wednesday 04 February 2026 00:42:20 +0000 (0:00:00.123) 0:00:17.545 ****
2026-02-04 00:42:22.522797 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:22.522808 | orchestrator |
2026-02-04 00:42:22.522818 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-04 00:42:22.522829 | orchestrator | Wednesday 04 February 2026 00:42:20 +0000 (0:00:00.123) 0:00:17.668 ****
2026-02-04 00:42:22.522840 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:42:22.522851 | orchestrator |
2026-02-04 00:42:22.522863
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-04 00:42:22.522874 | orchestrator | Wednesday 04 February 2026 00:42:21 +0000 (0:00:00.133) 0:00:17.802 **** 2026-02-04 00:42:22.522884 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:42:22.522895 | orchestrator | 2026-02-04 00:42:22.522906 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-04 00:42:22.522917 | orchestrator | Wednesday 04 February 2026 00:42:21 +0000 (0:00:00.123) 0:00:17.925 **** 2026-02-04 00:42:22.522938 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:42:22.522949 | orchestrator | 2026-02-04 00:42:22.522960 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-04 00:42:22.522971 | orchestrator | Wednesday 04 February 2026 00:42:21 +0000 (0:00:00.132) 0:00:18.058 **** 2026-02-04 00:42:22.522982 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:42:22.522992 | orchestrator | 2026-02-04 00:42:22.523003 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-04 00:42:22.523015 | orchestrator | Wednesday 04 February 2026 00:42:21 +0000 (0:00:00.126) 0:00:18.184 **** 2026-02-04 00:42:22.523026 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:42:22.523037 | orchestrator | 2026-02-04 00:42:22.523048 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-04 00:42:22.523059 | orchestrator | Wednesday 04 February 2026 00:42:21 +0000 (0:00:00.121) 0:00:18.305 **** 2026-02-04 00:42:22.523071 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'data_vg': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'})  2026-02-04 00:42:22.523084 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'data_vg': 
'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'})  2026-02-04 00:42:22.523095 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:42:22.523106 | orchestrator | 2026-02-04 00:42:22.523117 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-04 00:42:22.523128 | orchestrator | Wednesday 04 February 2026 00:42:21 +0000 (0:00:00.269) 0:00:18.575 **** 2026-02-04 00:42:22.523139 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'data_vg': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'})  2026-02-04 00:42:22.523151 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'data_vg': 'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'})  2026-02-04 00:42:22.523162 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:42:22.523173 | orchestrator | 2026-02-04 00:42:22.523184 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-04 00:42:22.523195 | orchestrator | Wednesday 04 February 2026 00:42:21 +0000 (0:00:00.129) 0:00:18.704 **** 2026-02-04 00:42:22.523206 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'data_vg': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'})  2026-02-04 00:42:22.523217 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'data_vg': 'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'})  2026-02-04 00:42:22.523229 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:42:22.523240 | orchestrator | 2026-02-04 00:42:22.523251 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-04 00:42:22.523262 | orchestrator | Wednesday 04 February 2026 00:42:22 +0000 (0:00:00.130) 0:00:18.835 **** 2026-02-04 00:42:22.523273 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'data_vg': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'})  2026-02-04 00:42:22.523284 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'data_vg': 'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'})  2026-02-04 00:42:22.523295 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:42:22.523306 | orchestrator | 2026-02-04 00:42:22.523317 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-04 00:42:22.523328 | orchestrator | Wednesday 04 February 2026 00:42:22 +0000 (0:00:00.132) 0:00:18.968 **** 2026-02-04 00:42:22.523339 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'data_vg': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'})  2026-02-04 00:42:22.523350 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'data_vg': 'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'})  2026-02-04 00:42:22.523369 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:42:22.523380 | orchestrator | 2026-02-04 00:42:22.523391 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-04 00:42:22.523402 | orchestrator | Wednesday 04 February 2026 00:42:22 +0000 (0:00:00.139) 0:00:19.107 **** 2026-02-04 00:42:22.523419 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'data_vg': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'})  2026-02-04 00:42:27.761112 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'data_vg': 'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'})  2026-02-04 00:42:27.761230 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:42:27.761259 | orchestrator | 2026-02-04 00:42:27.761274 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-02-04 00:42:27.761288 | orchestrator | Wednesday 04 February 2026 00:42:22 +0000 (0:00:00.142) 0:00:19.250 **** 2026-02-04 00:42:27.761303 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'data_vg': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'})  2026-02-04 00:42:27.761323 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'data_vg': 'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'})  2026-02-04 00:42:27.761341 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:42:27.761359 | orchestrator | 2026-02-04 00:42:27.761401 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-04 00:42:27.761424 | orchestrator | Wednesday 04 February 2026 00:42:22 +0000 (0:00:00.140) 0:00:19.390 **** 2026-02-04 00:42:27.761438 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'data_vg': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'})  2026-02-04 00:42:27.761450 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'data_vg': 'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'})  2026-02-04 00:42:27.761461 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:42:27.761472 | orchestrator | 2026-02-04 00:42:27.761483 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-04 00:42:27.761494 | orchestrator | Wednesday 04 February 2026 00:42:22 +0000 (0:00:00.136) 0:00:19.527 **** 2026-02-04 00:42:27.761505 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:42:27.761517 | orchestrator | 2026-02-04 00:42:27.761528 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-04 00:42:27.761540 | orchestrator | Wednesday 04 February 2026 00:42:23 +0000 
(0:00:00.474) 0:00:20.001 **** 2026-02-04 00:42:27.761551 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:42:27.761562 | orchestrator | 2026-02-04 00:42:27.761573 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-04 00:42:27.761612 | orchestrator | Wednesday 04 February 2026 00:42:23 +0000 (0:00:00.517) 0:00:20.519 **** 2026-02-04 00:42:27.761626 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:42:27.761637 | orchestrator | 2026-02-04 00:42:27.761648 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-04 00:42:27.761659 | orchestrator | Wednesday 04 February 2026 00:42:23 +0000 (0:00:00.160) 0:00:20.679 **** 2026-02-04 00:42:27.761670 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'vg_name': 'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'}) 2026-02-04 00:42:27.761692 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'vg_name': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'}) 2026-02-04 00:42:27.761711 | orchestrator | 2026-02-04 00:42:27.761730 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-04 00:42:27.761750 | orchestrator | Wednesday 04 February 2026 00:42:24 +0000 (0:00:00.186) 0:00:20.866 **** 2026-02-04 00:42:27.761768 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'data_vg': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'})  2026-02-04 00:42:27.761819 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'data_vg': 'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'})  2026-02-04 00:42:27.761839 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:42:27.761851 | orchestrator | 2026-02-04 00:42:27.761862 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-02-04 00:42:27.761873 | orchestrator | Wednesday 04 February 2026 00:42:24 +0000 (0:00:00.346) 0:00:21.212 **** 2026-02-04 00:42:27.761884 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'data_vg': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'})  2026-02-04 00:42:27.761895 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'data_vg': 'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'})  2026-02-04 00:42:27.761906 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:42:27.761917 | orchestrator | 2026-02-04 00:42:27.761929 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-04 00:42:27.761940 | orchestrator | Wednesday 04 February 2026 00:42:24 +0000 (0:00:00.174) 0:00:21.386 **** 2026-02-04 00:42:27.761951 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'data_vg': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'})  2026-02-04 00:42:27.761963 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'data_vg': 'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'})  2026-02-04 00:42:27.761974 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:42:27.761984 | orchestrator | 2026-02-04 00:42:27.761995 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-04 00:42:27.762007 | orchestrator | Wednesday 04 February 2026 00:42:24 +0000 (0:00:00.174) 0:00:21.561 **** 2026-02-04 00:42:27.762107 | orchestrator | ok: [testbed-node-3] => { 2026-02-04 00:42:27.762131 | orchestrator |  "lvm_report": { 2026-02-04 00:42:27.762149 | orchestrator |  "lv": [ 2026-02-04 00:42:27.762168 | orchestrator |  { 2026-02-04 00:42:27.762186 | orchestrator |  "lv_name": 
"osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031", 2026-02-04 00:42:27.762206 | orchestrator |  "vg_name": "ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031" 2026-02-04 00:42:27.762296 | orchestrator |  }, 2026-02-04 00:42:27.762321 | orchestrator |  { 2026-02-04 00:42:27.762335 | orchestrator |  "lv_name": "osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4", 2026-02-04 00:42:27.762347 | orchestrator |  "vg_name": "ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4" 2026-02-04 00:42:27.762358 | orchestrator |  } 2026-02-04 00:42:27.762369 | orchestrator |  ], 2026-02-04 00:42:27.762380 | orchestrator |  "pv": [ 2026-02-04 00:42:27.762390 | orchestrator |  { 2026-02-04 00:42:27.762401 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-04 00:42:27.762412 | orchestrator |  "vg_name": "ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4" 2026-02-04 00:42:27.762427 | orchestrator |  }, 2026-02-04 00:42:27.762446 | orchestrator |  { 2026-02-04 00:42:27.762464 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-04 00:42:27.762482 | orchestrator |  "vg_name": "ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031" 2026-02-04 00:42:27.762501 | orchestrator |  } 2026-02-04 00:42:27.762519 | orchestrator |  ] 2026-02-04 00:42:27.762537 | orchestrator |  } 2026-02-04 00:42:27.762557 | orchestrator | } 2026-02-04 00:42:27.762576 | orchestrator | 2026-02-04 00:42:27.762622 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-04 00:42:27.762641 | orchestrator | 2026-02-04 00:42:27.762658 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-04 00:42:27.762678 | orchestrator | Wednesday 04 February 2026 00:42:25 +0000 (0:00:00.291) 0:00:21.853 **** 2026-02-04 00:42:27.762706 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-04 00:42:27.762717 | orchestrator | 2026-02-04 00:42:27.762728 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-04 
00:42:27.762740 | orchestrator | Wednesday 04 February 2026 00:42:25 +0000 (0:00:00.308) 0:00:22.162 **** 2026-02-04 00:42:27.762751 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:42:27.762762 | orchestrator | 2026-02-04 00:42:27.762773 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:42:27.762784 | orchestrator | Wednesday 04 February 2026 00:42:25 +0000 (0:00:00.246) 0:00:22.408 **** 2026-02-04 00:42:27.762796 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-04 00:42:27.762815 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-04 00:42:27.762833 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-04 00:42:27.762851 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-04 00:42:27.762870 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-04 00:42:27.762888 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-04 00:42:27.762908 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-04 00:42:27.762928 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-04 00:42:27.762940 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-04 00:42:27.762951 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-04 00:42:27.762962 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-04 00:42:27.762973 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-04 00:42:27.762984 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-04 00:42:27.762995 | orchestrator | 2026-02-04 00:42:27.763006 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:42:27.763016 | orchestrator | Wednesday 04 February 2026 00:42:26 +0000 (0:00:00.384) 0:00:22.793 **** 2026-02-04 00:42:27.763027 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:42:27.763038 | orchestrator | 2026-02-04 00:42:27.763049 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:42:27.763060 | orchestrator | Wednesday 04 February 2026 00:42:26 +0000 (0:00:00.208) 0:00:23.001 **** 2026-02-04 00:42:27.763071 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:42:27.763082 | orchestrator | 2026-02-04 00:42:27.763093 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:42:27.763103 | orchestrator | Wednesday 04 February 2026 00:42:26 +0000 (0:00:00.210) 0:00:23.212 **** 2026-02-04 00:42:27.763115 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:42:27.763134 | orchestrator | 2026-02-04 00:42:27.763146 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:42:27.763157 | orchestrator | Wednesday 04 February 2026 00:42:27 +0000 (0:00:00.617) 0:00:23.829 **** 2026-02-04 00:42:27.763171 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:42:27.763190 | orchestrator | 2026-02-04 00:42:27.763208 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:42:27.763227 | orchestrator | Wednesday 04 February 2026 00:42:27 +0000 (0:00:00.218) 0:00:24.048 **** 2026-02-04 00:42:27.763245 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:42:27.763263 | orchestrator | 2026-02-04 00:42:27.763283 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-02-04 00:42:27.763302 | orchestrator | Wednesday 04 February 2026 00:42:27 +0000 (0:00:00.214) 0:00:24.263 **** 2026-02-04 00:42:27.763331 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:42:27.763342 | orchestrator | 2026-02-04 00:42:27.763367 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:42:39.452170 | orchestrator | Wednesday 04 February 2026 00:42:27 +0000 (0:00:00.214) 0:00:24.478 **** 2026-02-04 00:42:39.452288 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:42:39.452311 | orchestrator | 2026-02-04 00:42:39.452325 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:42:39.452338 | orchestrator | Wednesday 04 February 2026 00:42:27 +0000 (0:00:00.222) 0:00:24.701 **** 2026-02-04 00:42:39.452350 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:42:39.452362 | orchestrator | 2026-02-04 00:42:39.452375 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:42:39.452388 | orchestrator | Wednesday 04 February 2026 00:42:28 +0000 (0:00:00.290) 0:00:24.991 **** 2026-02-04 00:42:39.452400 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004) 2026-02-04 00:42:39.452414 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004) 2026-02-04 00:42:39.452427 | orchestrator | 2026-02-04 00:42:39.452440 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:42:39.452453 | orchestrator | Wednesday 04 February 2026 00:42:28 +0000 (0:00:00.533) 0:00:25.525 **** 2026-02-04 00:42:39.452465 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b014772c-38b5-4caa-9603-223bc8ef3a74) 2026-02-04 00:42:39.452478 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b014772c-38b5-4caa-9603-223bc8ef3a74) 2026-02-04 00:42:39.452491 | orchestrator | 2026-02-04 00:42:39.452503 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:42:39.452516 | orchestrator | Wednesday 04 February 2026 00:42:29 +0000 (0:00:00.414) 0:00:25.940 **** 2026-02-04 00:42:39.452526 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_70272979-0540-4b40-8ef0-41f73c6a4a5a) 2026-02-04 00:42:39.452534 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_70272979-0540-4b40-8ef0-41f73c6a4a5a) 2026-02-04 00:42:39.452541 | orchestrator | 2026-02-04 00:42:39.452549 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:42:39.452556 | orchestrator | Wednesday 04 February 2026 00:42:29 +0000 (0:00:00.468) 0:00:26.408 **** 2026-02-04 00:42:39.452564 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5b592fbb-955b-4fdf-b12f-717d86698fde) 2026-02-04 00:42:39.452571 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5b592fbb-955b-4fdf-b12f-717d86698fde) 2026-02-04 00:42:39.452579 | orchestrator | 2026-02-04 00:42:39.452643 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 00:42:39.452651 | orchestrator | Wednesday 04 February 2026 00:42:30 +0000 (0:00:00.902) 0:00:27.310 **** 2026-02-04 00:42:39.452659 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-04 00:42:39.452668 | orchestrator | 2026-02-04 00:42:39.452677 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:42:39.452686 | orchestrator | Wednesday 04 February 2026 00:42:31 +0000 (0:00:00.559) 0:00:27.870 **** 2026-02-04 00:42:39.452695 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-02-04 00:42:39.452704 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-04 00:42:39.452713 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-04 00:42:39.452722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-04 00:42:39.452731 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-04 00:42:39.452740 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-04 00:42:39.452772 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-04 00:42:39.452781 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-04 00:42:39.452790 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-04 00:42:39.452798 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-04 00:42:39.452807 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-04 00:42:39.452816 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-04 00:42:39.452823 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-04 00:42:39.452831 | orchestrator | 2026-02-04 00:42:39.452838 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:42:39.452846 | orchestrator | Wednesday 04 February 2026 00:42:31 +0000 (0:00:00.838) 0:00:28.708 **** 2026-02-04 00:42:39.452853 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:42:39.452860 | orchestrator | 2026-02-04 
00:42:39.452868 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:42:39.452891 | orchestrator | Wednesday 04 February 2026 00:42:32 +0000 (0:00:00.232) 0:00:28.941 **** 2026-02-04 00:42:39.452899 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:42:39.452906 | orchestrator | 2026-02-04 00:42:39.452914 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:42:39.452921 | orchestrator | Wednesday 04 February 2026 00:42:32 +0000 (0:00:00.222) 0:00:29.163 **** 2026-02-04 00:42:39.452933 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:42:39.452945 | orchestrator | 2026-02-04 00:42:39.452981 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:42:39.452995 | orchestrator | Wednesday 04 February 2026 00:42:32 +0000 (0:00:00.219) 0:00:29.383 **** 2026-02-04 00:42:39.453008 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:42:39.453020 | orchestrator | 2026-02-04 00:42:39.453031 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:42:39.453039 | orchestrator | Wednesday 04 February 2026 00:42:32 +0000 (0:00:00.199) 0:00:29.583 **** 2026-02-04 00:42:39.453046 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:42:39.453054 | orchestrator | 2026-02-04 00:42:39.453061 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:42:39.453069 | orchestrator | Wednesday 04 February 2026 00:42:33 +0000 (0:00:00.195) 0:00:29.778 **** 2026-02-04 00:42:39.453076 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:42:39.453083 | orchestrator | 2026-02-04 00:42:39.453092 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:42:39.453104 | orchestrator | Wednesday 04 February 2026 00:42:33 +0000 (0:00:00.202) 
0:00:29.981 **** 2026-02-04 00:42:39.453140 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:42:39.453152 | orchestrator | 2026-02-04 00:42:39.453163 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:42:39.453174 | orchestrator | Wednesday 04 February 2026 00:42:33 +0000 (0:00:00.214) 0:00:30.196 **** 2026-02-04 00:42:39.453186 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:42:39.453198 | orchestrator | 2026-02-04 00:42:39.453209 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:42:39.453221 | orchestrator | Wednesday 04 February 2026 00:42:33 +0000 (0:00:00.201) 0:00:30.397 **** 2026-02-04 00:42:39.453233 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-04 00:42:39.453246 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-04 00:42:39.453259 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-04 00:42:39.453272 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-04 00:42:39.453285 | orchestrator | 2026-02-04 00:42:39.453296 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:42:39.453318 | orchestrator | Wednesday 04 February 2026 00:42:34 +0000 (0:00:00.843) 0:00:31.241 **** 2026-02-04 00:42:39.453325 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:42:39.453333 | orchestrator | 2026-02-04 00:42:39.453340 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:42:39.453348 | orchestrator | Wednesday 04 February 2026 00:42:34 +0000 (0:00:00.195) 0:00:31.436 **** 2026-02-04 00:42:39.453355 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:42:39.453362 | orchestrator | 2026-02-04 00:42:39.453370 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:42:39.453377 | orchestrator | Wednesday 04 
February 2026 00:42:35 +0000 (0:00:00.646) 0:00:32.082 ****
2026-02-04 00:42:39.453385 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:39.453392 | orchestrator |
2026-02-04 00:42:39.453399 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:42:39.453407 | orchestrator | Wednesday 04 February 2026 00:42:35 +0000 (0:00:00.218) 0:00:32.301 ****
2026-02-04 00:42:39.453414 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:39.453421 | orchestrator |
2026-02-04 00:42:39.453428 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-04 00:42:39.453441 | orchestrator | Wednesday 04 February 2026 00:42:35 +0000 (0:00:00.200) 0:00:32.501 ****
2026-02-04 00:42:39.453449 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:39.453456 | orchestrator |
2026-02-04 00:42:39.453464 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-04 00:42:39.453471 | orchestrator | Wednesday 04 February 2026 00:42:35 +0000 (0:00:00.131) 0:00:32.632 ****
2026-02-04 00:42:39.453479 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'}})
2026-02-04 00:42:39.453487 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'}})
2026-02-04 00:42:39.453494 | orchestrator |
2026-02-04 00:42:39.453501 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-04 00:42:39.453509 | orchestrator | Wednesday 04 February 2026 00:42:36 +0000 (0:00:00.182) 0:00:32.815 ****
2026-02-04 00:42:39.453517 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'data_vg': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:42:39.453526 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'data_vg': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:42:39.453534 | orchestrator |
2026-02-04 00:42:39.453541 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-04 00:42:39.453548 | orchestrator | Wednesday 04 February 2026 00:42:37 +0000 (0:00:01.856) 0:00:34.671 ****
2026-02-04 00:42:39.453556 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'data_vg': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:42:39.453564 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'data_vg': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:42:39.453572 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:39.453606 | orchestrator |
2026-02-04 00:42:39.453614 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-04 00:42:39.453622 | orchestrator | Wednesday 04 February 2026 00:42:38 +0000 (0:00:00.138) 0:00:34.810 ****
2026-02-04 00:42:39.453629 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'data_vg': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:42:39.453646 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'data_vg': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:42:44.416482 | orchestrator |
2026-02-04 00:42:44.416569 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-04 00:42:44.416657 | orchestrator | Wednesday 04 February 2026 00:42:39 +0000 (0:00:01.367) 0:00:36.178 ****
2026-02-04 00:42:44.416669 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'data_vg': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:42:44.416680 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'data_vg': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:42:44.416689 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:44.416699 | orchestrator |
2026-02-04 00:42:44.416708 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-04 00:42:44.416717 | orchestrator | Wednesday 04 February 2026 00:42:39 +0000 (0:00:00.119) 0:00:36.297 ****
2026-02-04 00:42:44.416726 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:44.416735 | orchestrator |
2026-02-04 00:42:44.416744 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-04 00:42:44.416753 | orchestrator | Wednesday 04 February 2026 00:42:39 +0000 (0:00:00.122) 0:00:36.420 ****
2026-02-04 00:42:44.416762 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'data_vg': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:42:44.416771 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'data_vg': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:42:44.416780 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:44.416789 | orchestrator |
2026-02-04 00:42:44.416798 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-04 00:42:44.416807 | orchestrator | Wednesday 04 February 2026 00:42:39 +0000 (0:00:00.135) 0:00:36.555 ****
2026-02-04 00:42:44.416816 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:44.416824 | orchestrator |
2026-02-04 00:42:44.416833 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-04 00:42:44.416842 | orchestrator | Wednesday 04 February 2026 00:42:39 +0000 (0:00:00.111) 0:00:36.666 ****
2026-02-04 00:42:44.416851 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'data_vg': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:42:44.416860 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'data_vg': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:42:44.416869 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:44.416878 | orchestrator |
2026-02-04 00:42:44.416886 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-04 00:42:44.416909 | orchestrator | Wednesday 04 February 2026 00:42:40 +0000 (0:00:00.284) 0:00:36.951 ****
2026-02-04 00:42:44.416918 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:44.416927 | orchestrator |
2026-02-04 00:42:44.416936 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-04 00:42:44.416945 | orchestrator | Wednesday 04 February 2026 00:42:40 +0000 (0:00:00.116) 0:00:37.067 ****
2026-02-04 00:42:44.416953 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'data_vg': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:42:44.416962 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'data_vg': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:42:44.416971 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:44.416980 | orchestrator |
2026-02-04 00:42:44.416988 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-04 00:42:44.416997 | orchestrator | Wednesday 04 February 2026 00:42:40 +0000 (0:00:00.134) 0:00:37.204 ****
2026-02-04 00:42:44.417006 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:42:44.417015 | orchestrator |
2026-02-04 00:42:44.417025 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-04 00:42:44.417042 | orchestrator | Wednesday 04 February 2026 00:42:40 +0000 (0:00:00.134) 0:00:37.338 ****
2026-02-04 00:42:44.417052 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'data_vg': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:42:44.417063 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'data_vg': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:42:44.417073 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:44.417083 | orchestrator |
2026-02-04 00:42:44.417093 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-04 00:42:44.417104 | orchestrator | Wednesday 04 February 2026 00:42:40 +0000 (0:00:00.136) 0:00:37.474 ****
2026-02-04 00:42:44.417113 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'data_vg': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:42:44.417124 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'data_vg': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:42:44.417134 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:44.417144 | orchestrator |
2026-02-04 00:42:44.417155 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-04 00:42:44.417180 | orchestrator | Wednesday 04 February 2026 00:42:40 +0000 (0:00:00.159) 0:00:37.634 ****
2026-02-04 00:42:44.417191 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'data_vg': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:42:44.417201 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'data_vg': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:42:44.417211 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:44.417222 | orchestrator |
2026-02-04 00:42:44.417232 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-04 00:42:44.417243 | orchestrator | Wednesday 04 February 2026 00:42:41 +0000 (0:00:00.132) 0:00:37.767 ****
2026-02-04 00:42:44.417253 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:44.417263 | orchestrator |
2026-02-04 00:42:44.417274 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-04 00:42:44.417285 | orchestrator | Wednesday 04 February 2026 00:42:41 +0000 (0:00:00.092) 0:00:37.860 ****
2026-02-04 00:42:44.417296 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:44.417306 | orchestrator |
2026-02-04 00:42:44.417316 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-04 00:42:44.417326 | orchestrator | Wednesday 04 February 2026 00:42:41 +0000 (0:00:00.120) 0:00:37.980 ****
2026-02-04 00:42:44.417336 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:44.417346 | orchestrator |
2026-02-04 00:42:44.417357 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-04 00:42:44.417367 | orchestrator | Wednesday 04 February 2026 00:42:41 +0000 (0:00:00.130) 0:00:38.111 ****
2026-02-04 00:42:44.417378 | orchestrator | ok: [testbed-node-4] => {
2026-02-04 00:42:44.417388 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-02-04 00:42:44.417397 | orchestrator | }
2026-02-04 00:42:44.417406 | orchestrator |
2026-02-04 00:42:44.417415 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-04 00:42:44.417423 | orchestrator | Wednesday 04 February 2026 00:42:41 +0000 (0:00:00.128) 0:00:38.239 ****
2026-02-04 00:42:44.417432 | orchestrator | ok: [testbed-node-4] => {
2026-02-04 00:42:44.417441 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-02-04 00:42:44.417450 | orchestrator | }
2026-02-04 00:42:44.417458 | orchestrator |
2026-02-04 00:42:44.417467 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-04 00:42:44.417476 | orchestrator | Wednesday 04 February 2026 00:42:41 +0000 (0:00:00.126) 0:00:38.365 ****
2026-02-04 00:42:44.417490 | orchestrator | ok: [testbed-node-4] => {
2026-02-04 00:42:44.417499 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-02-04 00:42:44.417508 | orchestrator | }
2026-02-04 00:42:44.417517 | orchestrator |
2026-02-04 00:42:44.417525 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-04 00:42:44.417534 | orchestrator | Wednesday 04 February 2026 00:42:41 +0000 (0:00:00.280) 0:00:38.646 ****
2026-02-04 00:42:44.417543 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:42:44.417552 | orchestrator |
2026-02-04 00:42:44.417560 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-04 00:42:44.417569 | orchestrator | Wednesday 04 February 2026 00:42:42 +0000 (0:00:00.505) 0:00:39.151 ****
2026-02-04 00:42:44.417578 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:42:44.417606 | orchestrator |
2026-02-04 00:42:44.417615 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-04 00:42:44.417623 | orchestrator | Wednesday 04 February 2026 00:42:42 +0000 (0:00:00.508) 0:00:39.660 ****
2026-02-04 00:42:44.417632 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:42:44.417641 | orchestrator |
2026-02-04 00:42:44.417649 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-04 00:42:44.417658 | orchestrator | Wednesday 04 February 2026 00:42:43 +0000 (0:00:00.114) 0:00:40.171 ****
2026-02-04 00:42:44.417667 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:42:44.417675 | orchestrator |
2026-02-04 00:42:44.417684 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-04 00:42:44.417692 | orchestrator | Wednesday 04 February 2026 00:42:43 +0000 (0:00:00.105) 0:00:40.286 ****
2026-02-04 00:42:44.417701 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:44.417709 | orchestrator |
2026-02-04 00:42:44.417718 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-04 00:42:44.417727 | orchestrator | Wednesday 04 February 2026 00:42:43 +0000 (0:00:00.100) 0:00:40.392 ****
2026-02-04 00:42:44.417735 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:44.417744 | orchestrator |
2026-02-04 00:42:44.417752 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-04 00:42:44.417761 | orchestrator | Wednesday 04 February 2026 00:42:43 +0000 (0:00:00.132) 0:00:40.492 ****
2026-02-04 00:42:44.417770 | orchestrator | ok: [testbed-node-4] => {
2026-02-04 00:42:44.417779 | orchestrator |     "vgs_report": {
2026-02-04 00:42:44.417787 | orchestrator |         "vg": []
2026-02-04 00:42:44.417796 | orchestrator |     }
2026-02-04 00:42:44.417805 | orchestrator | }
2026-02-04 00:42:44.417814 | orchestrator |
2026-02-04 00:42:44.417822 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-04 00:42:44.417831 | orchestrator | Wednesday 04 February 2026 00:42:43 +0000 (0:00:00.132) 0:00:40.625 ****
2026-02-04 00:42:44.417839 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:44.417848 | orchestrator |
2026-02-04 00:42:44.417857 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-04 00:42:44.417865 | orchestrator | Wednesday 04 February 2026 00:42:44 +0000 (0:00:00.125) 0:00:40.751 ****
2026-02-04 00:42:44.417874 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:44.417882 | orchestrator |
2026-02-04 00:42:44.417891 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-04 00:42:44.417900 | orchestrator | Wednesday 04 February 2026 00:42:44 +0000 (0:00:00.125) 0:00:40.876 ****
2026-02-04 00:42:44.417908 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:44.417917 | orchestrator |
2026-02-04 00:42:44.417925 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-04 00:42:44.417940 | orchestrator | Wednesday 04 February 2026 00:42:44 +0000 (0:00:00.132) 0:00:41.009 ****
2026-02-04 00:42:44.417949 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:44.417958 | orchestrator |
2026-02-04 00:42:44.417973 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-04 00:42:48.632958 | orchestrator | Wednesday 04 February 2026 00:42:44 +0000 (0:00:00.129) 0:00:41.138 ****
2026-02-04 00:42:48.633089 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:48.633107 | orchestrator |
2026-02-04 00:42:48.633120 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-04 00:42:48.633132 | orchestrator | Wednesday 04 February 2026 00:42:44 +0000 (0:00:00.256) 0:00:41.394 ****
2026-02-04 00:42:48.633143 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:48.633154 | orchestrator |
2026-02-04 00:42:48.633166 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-04 00:42:48.633177 | orchestrator | Wednesday 04 February 2026 00:42:44 +0000 (0:00:00.122) 0:00:41.517 ****
2026-02-04 00:42:48.633188 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:48.633199 | orchestrator |
2026-02-04 00:42:48.633210 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-04 00:42:48.633221 | orchestrator | Wednesday 04 February 2026 00:42:44 +0000 (0:00:00.120) 0:00:41.637 ****
2026-02-04 00:42:48.633232 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:48.633243 | orchestrator |
2026-02-04 00:42:48.633254 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-04 00:42:48.633265 | orchestrator | Wednesday 04 February 2026 00:42:45 +0000 (0:00:00.129) 0:00:41.766 ****
2026-02-04 00:42:48.633276 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:48.633287 | orchestrator |
2026-02-04 00:42:48.633298 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-04 00:42:48.633309 | orchestrator | Wednesday 04 February 2026 00:42:45 +0000 (0:00:00.127) 0:00:41.894 ****
2026-02-04 00:42:48.633320 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:48.633331 | orchestrator |
2026-02-04 00:42:48.633342 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-04 00:42:48.633354 | orchestrator | Wednesday 04 February 2026 00:42:45 +0000 (0:00:00.126) 0:00:42.020 ****
2026-02-04 00:42:48.633365 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:48.633376 | orchestrator |
2026-02-04 00:42:48.633386 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-04 00:42:48.633397 | orchestrator | Wednesday 04 February 2026 00:42:45 +0000 (0:00:00.129) 0:00:42.149 ****
2026-02-04 00:42:48.633408 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:48.633419 | orchestrator |
2026-02-04 00:42:48.633430 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-04 00:42:48.633441 | orchestrator | Wednesday 04 February 2026 00:42:45 +0000 (0:00:00.122) 0:00:42.271 ****
2026-02-04 00:42:48.633452 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:48.633463 | orchestrator |
2026-02-04 00:42:48.633474 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-04 00:42:48.633485 | orchestrator | Wednesday 04 February 2026 00:42:45 +0000 (0:00:00.128) 0:00:42.400 ****
2026-02-04 00:42:48.633499 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:48.633512 | orchestrator |
2026-02-04 00:42:48.633525 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-04 00:42:48.633554 | orchestrator | Wednesday 04 February 2026 00:42:45 +0000 (0:00:00.128) 0:00:42.528 ****
2026-02-04 00:42:48.633568 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'data_vg': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:42:48.633608 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'data_vg': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:42:48.633620 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:48.633633 | orchestrator |
2026-02-04 00:42:48.633646 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-04 00:42:48.633659 | orchestrator | Wednesday 04 February 2026 00:42:45 +0000 (0:00:00.148) 0:00:42.676 ****
2026-02-04 00:42:48.633672 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'data_vg': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:42:48.633694 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'data_vg': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:42:48.633707 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:48.633719 | orchestrator |
2026-02-04 00:42:48.633732 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-04 00:42:48.633745 | orchestrator | Wednesday 04 February 2026 00:42:46 +0000 (0:00:00.116) 0:00:42.793 ****
2026-02-04 00:42:48.633759 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'data_vg': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:42:48.633772 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'data_vg': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:42:48.633785 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:48.633798 | orchestrator |
2026-02-04 00:42:48.633811 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-04 00:42:48.633823 | orchestrator | Wednesday 04 February 2026 00:42:46 +0000 (0:00:00.243) 0:00:43.037 ****
2026-02-04 00:42:48.633837 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'data_vg': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:42:48.633850 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'data_vg': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:42:48.633863 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:48.633875 | orchestrator |
2026-02-04 00:42:48.633905 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-04 00:42:48.633917 | orchestrator | Wednesday 04 February 2026 00:42:46 +0000 (0:00:00.138) 0:00:43.175 ****
2026-02-04 00:42:48.633928 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'data_vg': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:42:48.633939 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'data_vg': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:42:48.633950 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:48.633961 | orchestrator |
2026-02-04 00:42:48.633972 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-04 00:42:48.633983 | orchestrator | Wednesday 04 February 2026 00:42:46 +0000 (0:00:00.138) 0:00:43.313 ****
2026-02-04 00:42:48.633994 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'data_vg': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:42:48.634006 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'data_vg': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:42:48.634149 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:48.634166 | orchestrator |
2026-02-04 00:42:48.634177 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-04 00:42:48.634188 | orchestrator | Wednesday 04 February 2026 00:42:46 +0000 (0:00:00.129) 0:00:43.443 ****
2026-02-04 00:42:48.634200 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'data_vg': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:42:48.634211 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'data_vg': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:42:48.634222 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:48.634234 | orchestrator |
2026-02-04 00:42:48.634245 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-04 00:42:48.634256 | orchestrator | Wednesday 04 February 2026 00:42:46 +0000 (0:00:00.130) 0:00:43.573 ****
2026-02-04 00:42:48.634267 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'data_vg': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:42:48.634287 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'data_vg': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:42:48.634305 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:48.634316 | orchestrator |
2026-02-04 00:42:48.634327 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-04 00:42:48.634338 | orchestrator | Wednesday 04 February 2026 00:42:46 +0000 (0:00:00.134) 0:00:43.708 ****
2026-02-04 00:42:48.634350 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:42:48.634361 | orchestrator |
2026-02-04 00:42:48.634372 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-04 00:42:48.634383 | orchestrator | Wednesday 04 February 2026 00:42:47 +0000 (0:00:00.491) 0:00:44.199 ****
2026-02-04 00:42:48.634394 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:42:48.634405 | orchestrator |
2026-02-04 00:42:48.634416 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-04 00:42:48.634427 | orchestrator | Wednesday 04 February 2026 00:42:47 +0000 (0:00:00.527) 0:00:44.727 ****
2026-02-04 00:42:48.634438 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:42:48.634449 | orchestrator |
2026-02-04 00:42:48.634460 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-04 00:42:48.634471 | orchestrator | Wednesday 04 February 2026 00:42:48 +0000 (0:00:00.137) 0:00:44.865 ****
2026-02-04 00:42:48.634482 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'vg_name': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:42:48.634494 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'vg_name': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:42:48.634506 | orchestrator |
2026-02-04 00:42:48.634517 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-04 00:42:48.634528 | orchestrator | Wednesday 04 February 2026 00:42:48 +0000 (0:00:00.173) 0:00:45.038 ****
2026-02-04 00:42:48.634539 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'data_vg': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:42:48.634550 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'data_vg': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:42:48.634561 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:48.634572 | orchestrator |
2026-02-04 00:42:48.634642 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-04 00:42:48.634655 | orchestrator | Wednesday 04 February 2026 00:42:48 +0000 (0:00:00.165) 0:00:45.204 ****
2026-02-04 00:42:48.634666 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'data_vg': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:42:48.634687 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'data_vg': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:42:54.483035 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:54.483162 | orchestrator |
2026-02-04 00:42:54.483181 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-04 00:42:54.483196 | orchestrator | Wednesday 04 February 2026 00:42:48 +0000 (0:00:00.154) 0:00:45.358 ****
2026-02-04 00:42:54.483224 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'data_vg': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:42:54.483248 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'data_vg': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:42:54.483260 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:42:54.483271 | orchestrator |
2026-02-04 00:42:54.483282 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-02-04 00:42:54.483319 | orchestrator | Wednesday 04 February 2026 00:42:48 +0000 (0:00:00.144) 0:00:45.503 ****
2026-02-04 00:42:54.483331 | orchestrator | ok: [testbed-node-4] => {
2026-02-04 00:42:54.483343 | orchestrator |     "lvm_report": {
2026-02-04 00:42:54.483355 | orchestrator |         "lv": [
2026-02-04 00:42:54.483366 | orchestrator |             {
2026-02-04 00:42:54.483377 | orchestrator |                 "lv_name": "osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7",
2026-02-04 00:42:54.483389 | orchestrator |                 "vg_name": "ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7"
2026-02-04 00:42:54.483400 | orchestrator |             },
2026-02-04 00:42:54.483411 | orchestrator |             {
2026-02-04 00:42:54.483422 | orchestrator |                 "lv_name": "osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd",
2026-02-04 00:42:54.483433 | orchestrator |                 "vg_name": "ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd"
2026-02-04 00:42:54.483444 | orchestrator |             }
2026-02-04 00:42:54.483454 | orchestrator |         ],
2026-02-04 00:42:54.483465 | orchestrator |         "pv": [
2026-02-04 00:42:54.483476 | orchestrator |             {
2026-02-04 00:42:54.483487 | orchestrator |                 "pv_name": "/dev/sdb",
2026-02-04 00:42:54.483498 | orchestrator |                 "vg_name": "ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7"
2026-02-04 00:42:54.483509 | orchestrator |             },
2026-02-04 00:42:54.483520 | orchestrator |             {
2026-02-04 00:42:54.483530 | orchestrator |                 "pv_name": "/dev/sdc",
2026-02-04 00:42:54.483541 | orchestrator |                 "vg_name": "ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd"
2026-02-04 00:42:54.483552 | orchestrator |             }
2026-02-04 00:42:54.483563 | orchestrator |         ]
2026-02-04 00:42:54.483574 | orchestrator |     }
2026-02-04 00:42:54.483612 | orchestrator | }
2026-02-04 00:42:54.483624 | orchestrator |
2026-02-04 00:42:54.483635 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-04 00:42:54.483649 | orchestrator |
2026-02-04 00:42:54.483668 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-04 00:42:54.483685 | orchestrator | Wednesday 04 February 2026 00:42:49 +0000 (0:00:00.460) 0:00:45.963 ****
2026-02-04 00:42:54.483703 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-04 00:42:54.483721 | orchestrator |
2026-02-04 00:42:54.483739 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-04 00:42:54.483757 | orchestrator | Wednesday 04 February 2026 00:42:49 +0000 (0:00:00.245) 0:00:46.208 ****
2026-02-04 00:42:54.483775 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:42:54.483793 | orchestrator |
2026-02-04 00:42:54.483813 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:54.483834 | orchestrator | Wednesday 04 February 2026 00:42:49 +0000 (0:00:00.236) 0:00:46.445 ****
2026-02-04 00:42:54.483855 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-02-04 00:42:54.483874 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-02-04 00:42:54.483892 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-02-04 00:42:54.483910 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-02-04 00:42:54.483928 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-02-04 00:42:54.483947 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-02-04 00:42:54.483965 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-02-04 00:42:54.483984 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-02-04 00:42:54.483998 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-02-04 00:42:54.484009 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-02-04 00:42:54.484032 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-02-04 00:42:54.484043 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-02-04 00:42:54.484054 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-02-04 00:42:54.484064 | orchestrator |
2026-02-04 00:42:54.484075 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:54.484091 | orchestrator | Wednesday 04 February 2026 00:42:50 +0000 (0:00:00.404) 0:00:46.849 ****
2026-02-04 00:42:54.484102 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:42:54.484113 | orchestrator |
2026-02-04 00:42:54.484124 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:54.484135 | orchestrator | Wednesday 04 February 2026 00:42:50 +0000 (0:00:00.212) 0:00:47.062 ****
2026-02-04 00:42:54.484146 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:42:54.484157 | orchestrator |
2026-02-04 00:42:54.484168 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:54.484199 | orchestrator | Wednesday 04 February 2026 00:42:50 +0000 (0:00:00.194) 0:00:47.256 ****
2026-02-04 00:42:54.484210 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:42:54.484221 | orchestrator |
2026-02-04 00:42:54.484232 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:54.484243 | orchestrator | Wednesday 04 February 2026 00:42:50 +0000 (0:00:00.200) 0:00:47.457 ****
2026-02-04 00:42:54.484254 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:42:54.484264 | orchestrator |
2026-02-04 00:42:54.484275 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:54.484292 | orchestrator | Wednesday 04 February 2026 00:42:50 +0000 (0:00:00.194) 0:00:47.651 ****
2026-02-04 00:42:54.484310 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:42:54.484328 | orchestrator |
2026-02-04 00:42:54.484346 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:54.484363 | orchestrator | Wednesday 04 February 2026 00:42:51 +0000 (0:00:00.566) 0:00:48.218 ****
2026-02-04 00:42:54.484381 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:42:54.484398 | orchestrator |
2026-02-04 00:42:54.484418 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:54.484439 | orchestrator | Wednesday 04 February 2026 00:42:51 +0000 (0:00:00.189) 0:00:48.408 ****
2026-02-04 00:42:54.484459 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:42:54.484477 | orchestrator |
2026-02-04 00:42:54.484497 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:54.484516 | orchestrator | Wednesday 04 February 2026 00:42:51 +0000 (0:00:00.205) 0:00:48.613 ****
2026-02-04 00:42:54.484534 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:42:54.484553 | orchestrator |
2026-02-04 00:42:54.484573 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:54.484651 | orchestrator | Wednesday 04 February 2026 00:42:52 +0000 (0:00:00.197) 0:00:48.810 ****
2026-02-04 00:42:54.484663 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3)
2026-02-04 00:42:54.484675 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3)
2026-02-04 00:42:54.484687 | orchestrator |
2026-02-04 00:42:54.484697 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:54.484708 | orchestrator | Wednesday 04 February 2026 00:42:52 +0000 (0:00:00.438) 0:00:49.249 ****
2026-02-04 00:42:54.484768 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_330cb526-2149-4826-b513-02c8e88ca89e)
2026-02-04 00:42:54.484781 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_330cb526-2149-4826-b513-02c8e88ca89e)
2026-02-04 00:42:54.484792 | orchestrator |
2026-02-04 00:42:54.484803 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:54.484830 | orchestrator | Wednesday 04 February 2026 00:42:52 +0000 (0:00:00.419) 0:00:49.669 ****
2026-02-04 00:42:54.484841 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e6547550-6f0e-4316-b715-af657c75c64a)
2026-02-04 00:42:54.484852 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e6547550-6f0e-4316-b715-af657c75c64a)
2026-02-04 00:42:54.484864 | orchestrator |
2026-02-04 00:42:54.484874 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:54.484885 | orchestrator | Wednesday 04 February 2026 00:42:53 +0000 (0:00:00.413) 0:00:50.082 ****
2026-02-04 00:42:54.484896 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6b2cce40-d718-4f99-a243-3b703c717e59)
2026-02-04 00:42:54.484907 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6b2cce40-d718-4f99-a243-3b703c717e59)
2026-02-04 00:42:54.484918 | orchestrator |
2026-02-04 00:42:54.484929 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-04 00:42:54.484940 | orchestrator | Wednesday 04 February 2026 00:42:53 +0000 (0:00:00.402) 0:00:50.485 ****
2026-02-04 00:42:54.484951 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-04 00:42:54.484962 | orchestrator |
2026-02-04 00:42:54.484973 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-04 00:42:54.484984 | orchestrator | Wednesday 04 February 2026 00:42:54 +0000 (0:00:00.314) 0:00:50.799 ****
2026-02-04 00:42:54.484995 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-02-04 00:42:54.485006 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-02-04 00:42:54.485017 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-02-04 00:42:54.485028 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-02-04 00:42:54.485039 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-02-04 00:42:54.485049 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-02-04 00:42:54.485060 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-02-04 00:42:54.485071 | orchestrator | included:
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-04 00:42:54.485082 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-04 00:42:54.485093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-04 00:42:54.485104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-02-04 00:42:54.485127 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-04 00:43:03.396222 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-04 00:43:03.396329 | orchestrator | 2026-02-04 00:43:03.396345 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:43:03.396358 | orchestrator | Wednesday 04 February 2026 00:42:54 +0000 (0:00:00.396) 0:00:51.196 **** 2026-02-04 00:43:03.396370 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:03.396382 | orchestrator | 2026-02-04 00:43:03.396393 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:43:03.396405 | orchestrator | Wednesday 04 February 2026 00:42:54 +0000 (0:00:00.189) 0:00:51.386 **** 2026-02-04 00:43:03.396416 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:03.396427 | orchestrator | 2026-02-04 00:43:03.396438 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:43:03.396450 | orchestrator | Wednesday 04 February 2026 00:42:55 +0000 (0:00:00.619) 0:00:52.005 **** 2026-02-04 00:43:03.396461 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:03.396499 | orchestrator | 2026-02-04 00:43:03.396511 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:43:03.396522 | 
orchestrator | Wednesday 04 February 2026 00:42:55 +0000 (0:00:00.195) 0:00:52.201 **** 2026-02-04 00:43:03.396533 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:03.396544 | orchestrator | 2026-02-04 00:43:03.396555 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:43:03.396567 | orchestrator | Wednesday 04 February 2026 00:42:55 +0000 (0:00:00.198) 0:00:52.400 **** 2026-02-04 00:43:03.396608 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:03.396619 | orchestrator | 2026-02-04 00:43:03.396630 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:43:03.396641 | orchestrator | Wednesday 04 February 2026 00:42:55 +0000 (0:00:00.204) 0:00:52.604 **** 2026-02-04 00:43:03.396652 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:03.396663 | orchestrator | 2026-02-04 00:43:03.396674 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:43:03.396685 | orchestrator | Wednesday 04 February 2026 00:42:56 +0000 (0:00:00.192) 0:00:52.797 **** 2026-02-04 00:43:03.396696 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:03.396707 | orchestrator | 2026-02-04 00:43:03.396718 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:43:03.396729 | orchestrator | Wednesday 04 February 2026 00:42:56 +0000 (0:00:00.213) 0:00:53.010 **** 2026-02-04 00:43:03.396740 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:03.396751 | orchestrator | 2026-02-04 00:43:03.396764 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:43:03.396778 | orchestrator | Wednesday 04 February 2026 00:42:56 +0000 (0:00:00.202) 0:00:53.212 **** 2026-02-04 00:43:03.396798 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-04 00:43:03.396837 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-02-04 00:43:03.396859 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-04 00:43:03.396877 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-04 00:43:03.396896 | orchestrator | 2026-02-04 00:43:03.396914 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:43:03.396933 | orchestrator | Wednesday 04 February 2026 00:42:57 +0000 (0:00:00.726) 0:00:53.938 **** 2026-02-04 00:43:03.396953 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:03.396972 | orchestrator | 2026-02-04 00:43:03.396992 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:43:03.397011 | orchestrator | Wednesday 04 February 2026 00:42:57 +0000 (0:00:00.197) 0:00:54.136 **** 2026-02-04 00:43:03.397032 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:03.397052 | orchestrator | 2026-02-04 00:43:03.397071 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:43:03.397084 | orchestrator | Wednesday 04 February 2026 00:42:57 +0000 (0:00:00.205) 0:00:54.341 **** 2026-02-04 00:43:03.397097 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:03.397111 | orchestrator | 2026-02-04 00:43:03.397122 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 00:43:03.397133 | orchestrator | Wednesday 04 February 2026 00:42:57 +0000 (0:00:00.189) 0:00:54.530 **** 2026-02-04 00:43:03.397144 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:03.397155 | orchestrator | 2026-02-04 00:43:03.397166 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-04 00:43:03.397177 | orchestrator | Wednesday 04 February 2026 00:42:58 +0000 (0:00:00.203) 0:00:54.734 **** 2026-02-04 00:43:03.397188 | orchestrator | skipping: [testbed-node-5] 2026-02-04 
00:43:03.397199 | orchestrator | 2026-02-04 00:43:03.397210 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-04 00:43:03.397221 | orchestrator | Wednesday 04 February 2026 00:42:58 +0000 (0:00:00.294) 0:00:55.029 **** 2026-02-04 00:43:03.397232 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '81b3d681-fa24-5b92-b5b8-11e84f5b22d9'}}) 2026-02-04 00:43:03.397255 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5659fb6c-b6d6-5368-9f3c-0e525a1333df'}}) 2026-02-04 00:43:03.397266 | orchestrator | 2026-02-04 00:43:03.397278 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-04 00:43:03.397289 | orchestrator | Wednesday 04 February 2026 00:42:58 +0000 (0:00:00.207) 0:00:55.237 **** 2026-02-04 00:43:03.397301 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'data_vg': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'}) 2026-02-04 00:43:03.397313 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'data_vg': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'}) 2026-02-04 00:43:03.397324 | orchestrator | 2026-02-04 00:43:03.397336 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-04 00:43:03.397367 | orchestrator | Wednesday 04 February 2026 00:43:00 +0000 (0:00:01.874) 0:00:57.111 **** 2026-02-04 00:43:03.397379 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'data_vg': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'})  2026-02-04 00:43:03.397392 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'data_vg': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'})  2026-02-04 00:43:03.397403 | orchestrator | skipping: 
[testbed-node-5] 2026-02-04 00:43:03.397414 | orchestrator | 2026-02-04 00:43:03.397426 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-04 00:43:03.397437 | orchestrator | Wednesday 04 February 2026 00:43:00 +0000 (0:00:00.152) 0:00:57.263 **** 2026-02-04 00:43:03.397449 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'data_vg': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'}) 2026-02-04 00:43:03.397460 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'data_vg': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'}) 2026-02-04 00:43:03.397471 | orchestrator | 2026-02-04 00:43:03.397483 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-04 00:43:03.397494 | orchestrator | Wednesday 04 February 2026 00:43:01 +0000 (0:00:01.323) 0:00:58.587 **** 2026-02-04 00:43:03.397505 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'data_vg': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'})  2026-02-04 00:43:03.397520 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'data_vg': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'})  2026-02-04 00:43:03.397539 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:03.397557 | orchestrator | 2026-02-04 00:43:03.397601 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-04 00:43:03.397622 | orchestrator | Wednesday 04 February 2026 00:43:02 +0000 (0:00:00.155) 0:00:58.743 **** 2026-02-04 00:43:03.397642 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:03.397660 | orchestrator | 2026-02-04 00:43:03.397677 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-04 00:43:03.397688 | 
orchestrator | Wednesday 04 February 2026 00:43:02 +0000 (0:00:00.145) 0:00:58.888 **** 2026-02-04 00:43:03.397699 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'data_vg': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'})  2026-02-04 00:43:03.397718 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'data_vg': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'})  2026-02-04 00:43:03.397730 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:03.397741 | orchestrator | 2026-02-04 00:43:03.397752 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-04 00:43:03.397763 | orchestrator | Wednesday 04 February 2026 00:43:02 +0000 (0:00:00.146) 0:00:59.035 **** 2026-02-04 00:43:03.397783 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:03.397795 | orchestrator | 2026-02-04 00:43:03.397806 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-04 00:43:03.397817 | orchestrator | Wednesday 04 February 2026 00:43:02 +0000 (0:00:00.122) 0:00:59.157 **** 2026-02-04 00:43:03.397842 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'data_vg': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'})  2026-02-04 00:43:03.397854 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'data_vg': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'})  2026-02-04 00:43:03.397866 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:03.397877 | orchestrator | 2026-02-04 00:43:03.397899 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-04 00:43:03.397910 | orchestrator | Wednesday 04 February 2026 00:43:02 +0000 (0:00:00.146) 0:00:59.303 **** 2026-02-04 00:43:03.397921 | orchestrator | 
skipping: [testbed-node-5] 2026-02-04 00:43:03.397932 | orchestrator | 2026-02-04 00:43:03.397943 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-04 00:43:03.397954 | orchestrator | Wednesday 04 February 2026 00:43:02 +0000 (0:00:00.145) 0:00:59.448 **** 2026-02-04 00:43:03.397965 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'data_vg': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'})  2026-02-04 00:43:03.397977 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'data_vg': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'})  2026-02-04 00:43:03.397992 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:03.398012 | orchestrator | 2026-02-04 00:43:03.398115 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-04 00:43:03.398134 | orchestrator | Wednesday 04 February 2026 00:43:02 +0000 (0:00:00.164) 0:00:59.612 **** 2026-02-04 00:43:03.398154 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:43:03.398174 | orchestrator | 2026-02-04 00:43:03.398193 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-04 00:43:03.398212 | orchestrator | Wednesday 04 February 2026 00:43:03 +0000 (0:00:00.335) 0:00:59.948 **** 2026-02-04 00:43:03.398244 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'data_vg': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'})  2026-02-04 00:43:09.607546 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'data_vg': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'})  2026-02-04 00:43:09.607718 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:09.607738 | orchestrator | 2026-02-04 00:43:09.607751 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-02-04 00:43:09.607765 | orchestrator | Wednesday 04 February 2026 00:43:03 +0000 (0:00:00.174) 0:01:00.122 **** 2026-02-04 00:43:09.607776 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'data_vg': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'})  2026-02-04 00:43:09.607788 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'data_vg': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'})  2026-02-04 00:43:09.607799 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:09.607810 | orchestrator | 2026-02-04 00:43:09.607822 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-04 00:43:09.607833 | orchestrator | Wednesday 04 February 2026 00:43:03 +0000 (0:00:00.143) 0:01:00.265 **** 2026-02-04 00:43:09.607844 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'data_vg': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'})  2026-02-04 00:43:09.607855 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'data_vg': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'})  2026-02-04 00:43:09.607893 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:09.607905 | orchestrator | 2026-02-04 00:43:09.607916 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-04 00:43:09.607927 | orchestrator | Wednesday 04 February 2026 00:43:03 +0000 (0:00:00.152) 0:01:00.418 **** 2026-02-04 00:43:09.607938 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:09.607949 | orchestrator | 2026-02-04 00:43:09.607959 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-04 00:43:09.607970 | orchestrator | Wednesday 04 February 2026 00:43:03 
+0000 (0:00:00.140) 0:01:00.558 **** 2026-02-04 00:43:09.607981 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:09.607992 | orchestrator | 2026-02-04 00:43:09.608003 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-04 00:43:09.608014 | orchestrator | Wednesday 04 February 2026 00:43:03 +0000 (0:00:00.135) 0:01:00.694 **** 2026-02-04 00:43:09.608043 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:09.608055 | orchestrator | 2026-02-04 00:43:09.608069 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-04 00:43:09.608083 | orchestrator | Wednesday 04 February 2026 00:43:04 +0000 (0:00:00.125) 0:01:00.819 **** 2026-02-04 00:43:09.608096 | orchestrator | ok: [testbed-node-5] => { 2026-02-04 00:43:09.608121 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-04 00:43:09.608134 | orchestrator | } 2026-02-04 00:43:09.608159 | orchestrator | 2026-02-04 00:43:09.608173 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-04 00:43:09.608186 | orchestrator | Wednesday 04 February 2026 00:43:04 +0000 (0:00:00.149) 0:01:00.969 **** 2026-02-04 00:43:09.608199 | orchestrator | ok: [testbed-node-5] => { 2026-02-04 00:43:09.608212 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-04 00:43:09.608225 | orchestrator | } 2026-02-04 00:43:09.608239 | orchestrator | 2026-02-04 00:43:09.608252 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-04 00:43:09.608265 | orchestrator | Wednesday 04 February 2026 00:43:04 +0000 (0:00:00.138) 0:01:01.107 **** 2026-02-04 00:43:09.608278 | orchestrator | ok: [testbed-node-5] => { 2026-02-04 00:43:09.608300 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-04 00:43:09.608312 | orchestrator | } 2026-02-04 00:43:09.608323 | orchestrator | 2026-02-04 00:43:09.608334 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-04 00:43:09.608356 | orchestrator | Wednesday 04 February 2026 00:43:04 +0000 (0:00:00.127) 0:01:01.234 **** 2026-02-04 00:43:09.608367 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:43:09.608379 | orchestrator | 2026-02-04 00:43:09.608390 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-04 00:43:09.608401 | orchestrator | Wednesday 04 February 2026 00:43:05 +0000 (0:00:00.564) 0:01:01.799 **** 2026-02-04 00:43:09.608424 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:43:09.608435 | orchestrator | 2026-02-04 00:43:09.608446 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-04 00:43:09.608457 | orchestrator | Wednesday 04 February 2026 00:43:05 +0000 (0:00:00.609) 0:01:02.408 **** 2026-02-04 00:43:09.608468 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:43:09.608513 | orchestrator | 2026-02-04 00:43:09.608525 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-02-04 00:43:09.608547 | orchestrator | Wednesday 04 February 2026 00:43:06 +0000 (0:00:00.711) 0:01:03.119 **** 2026-02-04 00:43:09.608559 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:43:09.608570 | orchestrator | 2026-02-04 00:43:09.608606 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-04 00:43:09.608618 | orchestrator | Wednesday 04 February 2026 00:43:06 +0000 (0:00:00.140) 0:01:03.260 **** 2026-02-04 00:43:09.608629 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:09.608640 | orchestrator | 2026-02-04 00:43:09.608651 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-04 00:43:09.608672 | orchestrator | Wednesday 04 February 2026 00:43:06 +0000 (0:00:00.120) 0:01:03.381 **** 2026-02-04 00:43:09.608683 | 
orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:09.608694 | orchestrator | 2026-02-04 00:43:09.608705 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-04 00:43:09.608716 | orchestrator | Wednesday 04 February 2026 00:43:06 +0000 (0:00:00.107) 0:01:03.489 **** 2026-02-04 00:43:09.608727 | orchestrator | ok: [testbed-node-5] => { 2026-02-04 00:43:09.608738 | orchestrator |  "vgs_report": { 2026-02-04 00:43:09.608763 | orchestrator |  "vg": [] 2026-02-04 00:43:09.608792 | orchestrator |  } 2026-02-04 00:43:09.608804 | orchestrator | } 2026-02-04 00:43:09.608816 | orchestrator | 2026-02-04 00:43:09.608827 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-04 00:43:09.608838 | orchestrator | Wednesday 04 February 2026 00:43:06 +0000 (0:00:00.158) 0:01:03.647 **** 2026-02-04 00:43:09.608849 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:09.608860 | orchestrator | 2026-02-04 00:43:09.608871 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-02-04 00:43:09.608882 | orchestrator | Wednesday 04 February 2026 00:43:07 +0000 (0:00:00.124) 0:01:03.771 **** 2026-02-04 00:43:09.608893 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:09.608904 | orchestrator | 2026-02-04 00:43:09.608915 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-04 00:43:09.608926 | orchestrator | Wednesday 04 February 2026 00:43:07 +0000 (0:00:00.163) 0:01:03.934 **** 2026-02-04 00:43:09.608937 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:09.608948 | orchestrator | 2026-02-04 00:43:09.608959 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-04 00:43:09.608970 | orchestrator | Wednesday 04 February 2026 00:43:07 +0000 (0:00:00.119) 0:01:04.054 **** 2026-02-04 00:43:09.608981 | 
orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:09.608992 | orchestrator | 2026-02-04 00:43:09.609003 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-04 00:43:09.609026 | orchestrator | Wednesday 04 February 2026 00:43:07 +0000 (0:00:00.155) 0:01:04.209 **** 2026-02-04 00:43:09.609037 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:09.609048 | orchestrator | 2026-02-04 00:43:09.609059 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-04 00:43:09.609070 | orchestrator | Wednesday 04 February 2026 00:43:07 +0000 (0:00:00.124) 0:01:04.334 **** 2026-02-04 00:43:09.609081 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:09.609092 | orchestrator | 2026-02-04 00:43:09.609121 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-04 00:43:09.609132 | orchestrator | Wednesday 04 February 2026 00:43:07 +0000 (0:00:00.124) 0:01:04.458 **** 2026-02-04 00:43:09.609144 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:09.609155 | orchestrator | 2026-02-04 00:43:09.609166 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-04 00:43:09.609177 | orchestrator | Wednesday 04 February 2026 00:43:07 +0000 (0:00:00.145) 0:01:04.603 **** 2026-02-04 00:43:09.609188 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:09.609199 | orchestrator | 2026-02-04 00:43:09.609210 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-04 00:43:09.609221 | orchestrator | Wednesday 04 February 2026 00:43:08 +0000 (0:00:00.340) 0:01:04.944 **** 2026-02-04 00:43:09.609232 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:09.609243 | orchestrator | 2026-02-04 00:43:09.609259 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-02-04 00:43:09.609270 | orchestrator | Wednesday 04 February 2026 00:43:08 +0000 (0:00:00.205) 0:01:05.150 ****
2026-02-04 00:43:09.609282 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:43:09.609304 | orchestrator |
2026-02-04 00:43:09.609316 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-04 00:43:09.609326 | orchestrator | Wednesday 04 February 2026 00:43:08 +0000 (0:00:00.175) 0:01:05.326 ****
2026-02-04 00:43:09.609345 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:43:09.609355 | orchestrator |
2026-02-04 00:43:09.609366 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-04 00:43:09.609378 | orchestrator | Wednesday 04 February 2026 00:43:08 +0000 (0:00:00.127) 0:01:05.454 ****
2026-02-04 00:43:09.609389 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:43:09.609400 | orchestrator |
2026-02-04 00:43:09.609411 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-04 00:43:09.609422 | orchestrator | Wednesday 04 February 2026 00:43:08 +0000 (0:00:00.132) 0:01:05.586 ****
2026-02-04 00:43:09.609433 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:43:09.609443 | orchestrator |
2026-02-04 00:43:09.609454 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-04 00:43:09.609465 | orchestrator | Wednesday 04 February 2026 00:43:08 +0000 (0:00:00.130) 0:01:05.717 ****
2026-02-04 00:43:09.609476 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:43:09.609487 | orchestrator |
2026-02-04 00:43:09.609498 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-04 00:43:09.609509 | orchestrator | Wednesday 04 February 2026 00:43:09 +0000 (0:00:00.132) 0:01:05.850 ****
2026-02-04 00:43:09.609521 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'data_vg': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'})
2026-02-04 00:43:09.609532 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'data_vg': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'})
2026-02-04 00:43:09.609543 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:43:09.609554 | orchestrator |
2026-02-04 00:43:09.609565 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-04 00:43:09.609599 | orchestrator | Wednesday 04 February 2026 00:43:09 +0000 (0:00:00.176) 0:01:06.027 ****
2026-02-04 00:43:09.609618 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'data_vg': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'})
2026-02-04 00:43:09.609637 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'data_vg': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'})
2026-02-04 00:43:09.609657 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:43:09.609674 | orchestrator |
2026-02-04 00:43:09.609690 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-04 00:43:09.609701 | orchestrator | Wednesday 04 February 2026 00:43:09 +0000 (0:00:00.152) 0:01:06.179 ****
2026-02-04 00:43:09.609721 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'data_vg': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'})
2026-02-04 00:43:12.377724 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'data_vg': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'})
2026-02-04 00:43:12.377817 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:43:12.377829 | orchestrator |
2026-02-04 00:43:12.377837 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-04 00:43:12.377846 | orchestrator | Wednesday 04 February 2026 00:43:09 +0000 (0:00:00.154) 0:01:06.334 ****
2026-02-04 00:43:12.377854 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'data_vg': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'})
2026-02-04 00:43:12.377870 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'data_vg': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'})
2026-02-04 00:43:12.377908 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:43:12.377916 | orchestrator |
2026-02-04 00:43:12.377923 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-04 00:43:12.377931 | orchestrator | Wednesday 04 February 2026 00:43:09 +0000 (0:00:00.169) 0:01:06.503 ****
2026-02-04 00:43:12.377957 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'data_vg': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'})
2026-02-04 00:43:12.377964 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'data_vg': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'})
2026-02-04 00:43:12.377971 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:43:12.377977 | orchestrator |
2026-02-04 00:43:12.377984 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-04 00:43:12.377992 | orchestrator | Wednesday 04 February 2026 00:43:09 +0000 (0:00:00.166) 0:01:06.670 ****
2026-02-04 00:43:12.377998 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'data_vg': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'})
2026-02-04 00:43:12.378005 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'data_vg': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'})
2026-02-04 00:43:12.378060 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:43:12.378069 | orchestrator |
2026-02-04 00:43:12.378076 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-04 00:43:12.378083 | orchestrator | Wednesday 04 February 2026 00:43:10 +0000 (0:00:00.334) 0:01:07.005 ****
2026-02-04 00:43:12.378089 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'data_vg': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'})
2026-02-04 00:43:12.378096 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'data_vg': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'})
2026-02-04 00:43:12.378103 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:43:12.378111 | orchestrator |
2026-02-04 00:43:12.378118 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-04 00:43:12.378124 | orchestrator | Wednesday 04 February 2026 00:43:10 +0000 (0:00:00.137) 0:01:07.142 ****
2026-02-04 00:43:12.378131 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'data_vg': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'})
2026-02-04 00:43:12.378138 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'data_vg': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'})
2026-02-04 00:43:12.378145 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:43:12.378152 | orchestrator |
2026-02-04 00:43:12.378159 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-04 00:43:12.378166 | orchestrator | Wednesday 04 February 2026 00:43:10 +0000 (0:00:00.138) 0:01:07.280 ****
2026-02-04 00:43:12.378173 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:43:12.378181 | orchestrator |
2026-02-04 00:43:12.378188 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-04 00:43:12.378195 | orchestrator | Wednesday 04 February 2026 00:43:11 +0000 (0:00:00.485) 0:01:07.765 ****
2026-02-04 00:43:12.378202 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:43:12.378209 | orchestrator |
2026-02-04 00:43:12.378215 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-04 00:43:12.378222 | orchestrator | Wednesday 04 February 2026 00:43:11 +0000 (0:00:00.499) 0:01:08.265 ****
2026-02-04 00:43:12.378229 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:43:12.378236 | orchestrator |
2026-02-04 00:43:12.378243 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-04 00:43:12.378250 | orchestrator | Wednesday 04 February 2026 00:43:11 +0000 (0:00:00.130) 0:01:08.395 ****
2026-02-04 00:43:12.378257 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'vg_name': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'})
2026-02-04 00:43:12.378265 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'vg_name': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'})
2026-02-04 00:43:12.378278 | orchestrator |
2026-02-04 00:43:12.378285 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-04 00:43:12.378292 | orchestrator | Wednesday 04 February 2026 00:43:11 +0000 (0:00:00.145) 0:01:08.541 ****
2026-02-04 00:43:12.378313 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'data_vg': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'})
2026-02-04 00:43:12.378321 | orchestrator | skipping: [testbed-node-5] => (item={'data':
'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'data_vg': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'})  2026-02-04 00:43:12.378328 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:12.378395 | orchestrator | 2026-02-04 00:43:12.378404 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-04 00:43:12.378412 | orchestrator | Wednesday 04 February 2026 00:43:11 +0000 (0:00:00.134) 0:01:08.675 **** 2026-02-04 00:43:12.378419 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'data_vg': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'})  2026-02-04 00:43:12.378426 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'data_vg': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'})  2026-02-04 00:43:12.378433 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:12.378439 | orchestrator | 2026-02-04 00:43:12.378446 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-04 00:43:12.378453 | orchestrator | Wednesday 04 February 2026 00:43:12 +0000 (0:00:00.140) 0:01:08.816 **** 2026-02-04 00:43:12.378460 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'data_vg': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'})  2026-02-04 00:43:12.378467 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'data_vg': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'})  2026-02-04 00:43:12.378473 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:12.378480 | orchestrator | 2026-02-04 00:43:12.378487 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-04 00:43:12.378494 | orchestrator | Wednesday 04 February 2026 00:43:12 +0000 (0:00:00.134) 0:01:08.950 **** 2026-02-04 00:43:12.378501 | 
orchestrator | ok: [testbed-node-5] => { 2026-02-04 00:43:12.378508 | orchestrator |  "lvm_report": { 2026-02-04 00:43:12.378515 | orchestrator |  "lv": [ 2026-02-04 00:43:12.378522 | orchestrator |  { 2026-02-04 00:43:12.378529 | orchestrator |  "lv_name": "osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df", 2026-02-04 00:43:12.378542 | orchestrator |  "vg_name": "ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df" 2026-02-04 00:43:12.378549 | orchestrator |  }, 2026-02-04 00:43:12.378556 | orchestrator |  { 2026-02-04 00:43:12.378563 | orchestrator |  "lv_name": "osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9", 2026-02-04 00:43:12.378588 | orchestrator |  "vg_name": "ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9" 2026-02-04 00:43:12.378602 | orchestrator |  } 2026-02-04 00:43:12.378609 | orchestrator |  ], 2026-02-04 00:43:12.378616 | orchestrator |  "pv": [ 2026-02-04 00:43:12.378622 | orchestrator |  { 2026-02-04 00:43:12.378630 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-04 00:43:12.378637 | orchestrator |  "vg_name": "ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9" 2026-02-04 00:43:12.378644 | orchestrator |  }, 2026-02-04 00:43:12.378650 | orchestrator |  { 2026-02-04 00:43:12.378657 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-04 00:43:12.378664 | orchestrator |  "vg_name": "ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df" 2026-02-04 00:43:12.378671 | orchestrator |  } 2026-02-04 00:43:12.378677 | orchestrator |  ] 2026-02-04 00:43:12.378684 | orchestrator |  } 2026-02-04 00:43:12.378691 | orchestrator | } 2026-02-04 00:43:12.378704 | orchestrator | 2026-02-04 00:43:12.378710 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:43:12.378717 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-04 00:43:12.378724 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-04 00:43:12.378731 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-04 00:43:12.378738 | orchestrator | 2026-02-04 00:43:12.378744 | orchestrator | 2026-02-04 00:43:12.378751 | orchestrator | 2026-02-04 00:43:12.378758 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:43:12.378764 | orchestrator | Wednesday 04 February 2026 00:43:12 +0000 (0:00:00.129) 0:01:09.080 **** 2026-02-04 00:43:12.378771 | orchestrator | =============================================================================== 2026-02-04 00:43:12.378778 | orchestrator | Create block VGs -------------------------------------------------------- 5.65s 2026-02-04 00:43:12.378785 | orchestrator | Create block LVs -------------------------------------------------------- 4.11s 2026-02-04 00:43:12.378791 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.71s 2026-02-04 00:43:12.378798 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.69s 2026-02-04 00:43:12.378805 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.62s 2026-02-04 00:43:12.378811 | orchestrator | Add known partitions to the list of available block devices ------------- 1.61s 2026-02-04 00:43:12.378818 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.54s 2026-02-04 00:43:12.378825 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.45s 2026-02-04 00:43:12.378837 | orchestrator | Add known links to the list of available block devices ------------------ 1.20s 2026-02-04 00:43:12.642817 | orchestrator | Add known links to the list of available block devices ------------------ 0.90s 2026-02-04 00:43:12.642904 | orchestrator | Print LVM report data --------------------------------------------------- 0.88s 2026-02-04 00:43:12.642922 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.84s 2026-02-04 00:43:12.642938 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s 2026-02-04 00:43:12.642952 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.77s 2026-02-04 00:43:12.642967 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2026-02-04 00:43:12.642981 | orchestrator | Get initial list of available block devices ----------------------------- 0.68s 2026-02-04 00:43:12.642995 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2026-02-04 00:43:12.643009 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.65s 2026-02-04 00:43:12.643023 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2026-02-04 00:43:12.643037 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s 2026-02-04 00:43:24.631504 | orchestrator | 2026-02-04 00:43:24 | INFO  | Task 1f6c17b9-5c62-44aa-b3d2-3f3bcb6b5143 (facts) was prepared for execution. 2026-02-04 00:43:24.631715 | orchestrator | 2026-02-04 00:43:24 | INFO  | It takes a moment until task 1f6c17b9-5c62-44aa-b3d2-3f3bcb6b5143 (facts) has been started and output is visible here. 
2026-02-04 00:43:36.195339 | orchestrator | 2026-02-04 00:43:36.195508 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-04 00:43:36.195542 | orchestrator | 2026-02-04 00:43:36.195679 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-04 00:43:36.195707 | orchestrator | Wednesday 04 February 2026 00:43:28 +0000 (0:00:00.237) 0:00:00.237 **** 2026-02-04 00:43:36.195761 | orchestrator | ok: [testbed-manager] 2026-02-04 00:43:36.195783 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:43:36.195801 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:43:36.195819 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:43:36.195836 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:43:36.195854 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:43:36.195873 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:43:36.195888 | orchestrator | 2026-02-04 00:43:36.195904 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-04 00:43:36.195941 | orchestrator | Wednesday 04 February 2026 00:43:29 +0000 (0:00:01.173) 0:00:01.411 **** 2026-02-04 00:43:36.195964 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:43:36.195985 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:43:36.196002 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:43:36.196023 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:43:36.196044 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:43:36.196064 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:43:36.196083 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:36.196103 | orchestrator | 2026-02-04 00:43:36.196120 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-04 00:43:36.196139 | orchestrator | 2026-02-04 00:43:36.196158 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-04 00:43:36.196176 | orchestrator | Wednesday 04 February 2026 00:43:30 +0000 (0:00:01.101) 0:00:02.512 **** 2026-02-04 00:43:36.196195 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:43:36.196213 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:43:36.196231 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:43:36.196250 | orchestrator | ok: [testbed-manager] 2026-02-04 00:43:36.196267 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:43:36.196283 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:43:36.196300 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:43:36.196319 | orchestrator | 2026-02-04 00:43:36.196337 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-04 00:43:36.196355 | orchestrator | 2026-02-04 00:43:36.196374 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-04 00:43:36.196391 | orchestrator | Wednesday 04 February 2026 00:43:35 +0000 (0:00:04.820) 0:00:07.332 **** 2026-02-04 00:43:36.196409 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:43:36.196426 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:43:36.196444 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:43:36.196461 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:43:36.196478 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:43:36.196497 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:43:36.196515 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:43:36.196533 | orchestrator | 2026-02-04 00:43:36.196550 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:43:36.196611 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:43:36.196634 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-04 00:43:36.196652 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:43:36.196670 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:43:36.196688 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:43:36.196707 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:43:36.196725 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:43:36.196765 | orchestrator | 2026-02-04 00:43:36.196787 | orchestrator | 2026-02-04 00:43:36.196807 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:43:36.196824 | orchestrator | Wednesday 04 February 2026 00:43:35 +0000 (0:00:00.451) 0:00:07.784 **** 2026-02-04 00:43:36.196842 | orchestrator | =============================================================================== 2026-02-04 00:43:36.196860 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.82s 2026-02-04 00:43:36.196879 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.17s 2026-02-04 00:43:36.196897 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.10s 2026-02-04 00:43:36.196915 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s 2026-02-04 00:43:48.146781 | orchestrator | 2026-02-04 00:43:48 | INFO  | Task 3fa179d4-3e58-4c24-bb05-79c18700dd67 (frr) was prepared for execution. 2026-02-04 00:43:48.146885 | orchestrator | 2026-02-04 00:43:48 | INFO  | It takes a moment until task 3fa179d4-3e58-4c24-bb05-79c18700dd67 (frr) has been started and output is visible here. 
2026-02-04 00:44:11.563798 | orchestrator | 2026-02-04 00:44:11.563886 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-02-04 00:44:11.563898 | orchestrator | 2026-02-04 00:44:11.563906 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-02-04 00:44:11.563975 | orchestrator | Wednesday 04 February 2026 00:43:51 +0000 (0:00:00.170) 0:00:00.170 **** 2026-02-04 00:44:11.563989 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-02-04 00:44:11.564004 | orchestrator | 2026-02-04 00:44:11.564017 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-02-04 00:44:11.564029 | orchestrator | Wednesday 04 February 2026 00:43:52 +0000 (0:00:00.173) 0:00:00.344 **** 2026-02-04 00:44:11.564042 | orchestrator | changed: [testbed-manager] 2026-02-04 00:44:11.564056 | orchestrator | 2026-02-04 00:44:11.564068 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-02-04 00:44:11.564080 | orchestrator | Wednesday 04 February 2026 00:43:53 +0000 (0:00:01.083) 0:00:01.427 **** 2026-02-04 00:44:11.564093 | orchestrator | changed: [testbed-manager] 2026-02-04 00:44:11.564105 | orchestrator | 2026-02-04 00:44:11.564117 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-02-04 00:44:11.564125 | orchestrator | Wednesday 04 February 2026 00:44:01 +0000 (0:00:08.786) 0:00:10.214 **** 2026-02-04 00:44:11.564133 | orchestrator | ok: [testbed-manager] 2026-02-04 00:44:11.564141 | orchestrator | 2026-02-04 00:44:11.564149 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-02-04 00:44:11.564156 | orchestrator | Wednesday 04 February 2026 00:44:02 +0000 (0:00:00.991) 0:00:11.206 **** 2026-02-04 
00:44:11.564164 | orchestrator | changed: [testbed-manager] 2026-02-04 00:44:11.564171 | orchestrator | 2026-02-04 00:44:11.564179 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-02-04 00:44:11.564191 | orchestrator | Wednesday 04 February 2026 00:44:03 +0000 (0:00:00.904) 0:00:12.110 **** 2026-02-04 00:44:11.564203 | orchestrator | ok: [testbed-manager] 2026-02-04 00:44:11.564215 | orchestrator | 2026-02-04 00:44:11.564227 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-02-04 00:44:11.564241 | orchestrator | Wednesday 04 February 2026 00:44:04 +0000 (0:00:01.162) 0:00:13.273 **** 2026-02-04 00:44:11.564254 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:44:11.564267 | orchestrator | 2026-02-04 00:44:11.564279 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-02-04 00:44:11.564293 | orchestrator | Wednesday 04 February 2026 00:44:05 +0000 (0:00:00.132) 0:00:13.406 **** 2026-02-04 00:44:11.564325 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:44:11.564354 | orchestrator | 2026-02-04 00:44:11.564364 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-04 00:44:11.564372 | orchestrator | Wednesday 04 February 2026 00:44:05 +0000 (0:00:00.147) 0:00:13.553 **** 2026-02-04 00:44:11.564381 | orchestrator | changed: [testbed-manager] 2026-02-04 00:44:11.564389 | orchestrator | 2026-02-04 00:44:11.564401 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-04 00:44:11.564415 | orchestrator | Wednesday 04 February 2026 00:44:06 +0000 (0:00:00.963) 0:00:14.517 **** 2026-02-04 00:44:11.564428 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-04 00:44:11.564442 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-04 00:44:11.564458 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-04 00:44:11.564473 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-04 00:44:11.564487 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-04 00:44:11.564502 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-04 00:44:11.564515 | orchestrator | 2026-02-04 00:44:11.564528 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-02-04 00:44:11.564541 | orchestrator | Wednesday 04 February 2026 00:44:08 +0000 (0:00:02.186) 0:00:16.704 **** 2026-02-04 00:44:11.564576 | orchestrator | ok: [testbed-manager] 2026-02-04 00:44:11.564592 | orchestrator | 2026-02-04 00:44:11.564607 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-02-04 00:44:11.564620 | orchestrator | Wednesday 04 February 2026 00:44:10 +0000 (0:00:01.651) 0:00:18.356 **** 2026-02-04 00:44:11.564633 | orchestrator | changed: [testbed-manager] 2026-02-04 00:44:11.564647 | orchestrator | 2026-02-04 00:44:11.564661 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:44:11.564675 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 00:44:11.564690 | orchestrator | 2026-02-04 00:44:11.564703 | orchestrator | 2026-02-04 00:44:11.564717 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:44:11.564731 | orchestrator | Wednesday 04 February 2026 00:44:11 +0000 (0:00:01.340) 0:00:19.696 **** 2026-02-04 00:44:11.564745 | 
orchestrator | =============================================================================== 2026-02-04 00:44:11.564758 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.79s 2026-02-04 00:44:11.564770 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.19s 2026-02-04 00:44:11.564782 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.65s 2026-02-04 00:44:11.564795 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.34s 2026-02-04 00:44:11.564807 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.16s 2026-02-04 00:44:11.564845 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.08s 2026-02-04 00:44:11.564860 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.99s 2026-02-04 00:44:11.564872 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.96s 2026-02-04 00:44:11.564884 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.90s 2026-02-04 00:44:11.564898 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.17s 2026-02-04 00:44:11.564906 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2026-02-04 00:44:11.564914 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s 2026-02-04 00:44:11.759215 | orchestrator | 2026-02-04 00:44:11.760177 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed Feb 4 00:44:11 UTC 2026 2026-02-04 00:44:11.760222 | orchestrator | 2026-02-04 00:44:13.422299 | orchestrator | 2026-02-04 00:44:13 | INFO  | Collection nutshell is prepared for execution 2026-02-04 00:44:13.422399 | orchestrator | 2026-02-04 00:44:13 | INFO  | A [0] - 
dotfiles 2026-02-04 00:44:23.498639 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [0] - homer 2026-02-04 00:44:23.498762 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [0] - netdata 2026-02-04 00:44:23.498782 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [0] - openstackclient 2026-02-04 00:44:23.498795 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [0] - phpmyadmin 2026-02-04 00:44:23.498808 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [0] - common 2026-02-04 00:44:23.502523 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [1] -- loadbalancer 2026-02-04 00:44:23.502670 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [2] --- opensearch 2026-02-04 00:44:23.502848 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [2] --- mariadb-ng 2026-02-04 00:44:23.503236 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [3] ---- horizon 2026-02-04 00:44:23.503470 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [3] ---- keystone 2026-02-04 00:44:23.503675 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [4] ----- neutron 2026-02-04 00:44:23.504165 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [5] ------ wait-for-nova 2026-02-04 00:44:23.504180 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [6] ------- octavia 2026-02-04 00:44:23.506201 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [4] ----- barbican 2026-02-04 00:44:23.506285 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [4] ----- designate 2026-02-04 00:44:23.506532 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [4] ----- ironic 2026-02-04 00:44:23.506599 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [4] ----- placement 2026-02-04 00:44:23.506671 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [4] ----- magnum 2026-02-04 00:44:23.507738 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [1] -- openvswitch 2026-02-04 00:44:23.507789 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [2] --- ovn 2026-02-04 00:44:23.508084 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [1] -- memcached 2026-02-04 
00:44:23.508157 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [1] -- redis 2026-02-04 00:44:23.508249 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [1] -- rabbitmq-ng 2026-02-04 00:44:23.508726 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [0] - kubernetes 2026-02-04 00:44:23.511424 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [1] -- kubeconfig 2026-02-04 00:44:23.511452 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [1] -- copy-kubeconfig 2026-02-04 00:44:23.511796 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [0] - ceph 2026-02-04 00:44:23.514512 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [1] -- ceph-pools 2026-02-04 00:44:23.514609 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [2] --- copy-ceph-keys 2026-02-04 00:44:23.514633 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [3] ---- cephclient 2026-02-04 00:44:23.514653 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-02-04 00:44:23.514783 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [4] ----- wait-for-keystone 2026-02-04 00:44:23.514818 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [5] ------ kolla-ceph-rgw 2026-02-04 00:44:23.514837 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [5] ------ glance 2026-02-04 00:44:23.514856 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [5] ------ cinder 2026-02-04 00:44:23.514914 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [5] ------ nova 2026-02-04 00:44:23.515414 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [4] ----- prometheus 2026-02-04 00:44:23.515452 | orchestrator | 2026-02-04 00:44:23 | INFO  | A [5] ------ grafana 2026-02-04 00:44:23.687349 | orchestrator | 2026-02-04 00:44:23 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-02-04 00:44:23.687457 | orchestrator | 2026-02-04 00:44:23 | INFO  | Tasks are running in the background 2026-02-04 00:44:26.319337 | orchestrator | 2026-02-04 00:44:26 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-02-04 00:44:28.413826 | orchestrator | 2026-02-04 00:44:28 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state STARTED
2026-02-04 00:44:28.422513 | orchestrator | 2026-02-04 00:44:28 | INFO  | Task b4f08f5a-6e95-4e3d-83f6-db5cc3f44223 is in state STARTED
2026-02-04 00:44:28.422839 | orchestrator | 2026-02-04 00:44:28 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED
2026-02-04 00:44:28.423345 | orchestrator | 2026-02-04 00:44:28 | INFO  | Task 669fe5c8-a9ce-4465-8683-eab847f03453 is in state STARTED
2026-02-04 00:44:28.423865 | orchestrator | 2026-02-04 00:44:28 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED
2026-02-04 00:44:28.424438 | orchestrator | 2026-02-04 00:44:28 | INFO  | Task 1b7201e5-58fd-44a4-bc37-eef8894c4ee8 is in state STARTED
2026-02-04 00:44:28.425405 | orchestrator | 2026-02-04 00:44:28 | INFO  | Task 1b3df390-8282-4caf-a2df-af0d96a58c80 is in state STARTED
2026-02-04 00:44:28.425460 | orchestrator | 2026-02-04 00:44:28 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:44:49.950192 | orchestrator | 2026-02-04 00:44:49 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state STARTED
2026-02-04 00:44:49.950350 | orchestrator | 2026-02-04 00:44:49 | INFO  | Task d6bbe116-965e-40f6-b650-58470e6fad38 is in state STARTED
2026-02-04 00:44:49.950374 | orchestrator | 2026-02-04 00:44:49 | INFO  | Task b4f08f5a-6e95-4e3d-83f6-db5cc3f44223 is in state STARTED
2026-02-04 00:44:49.953072 | orchestrator | 2026-02-04 00:44:49 | INFO  | Task
7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED
2026-02-04 00:44:49.953778 | orchestrator | 2026-02-04 00:44:49 | INFO  | Task 669fe5c8-a9ce-4465-8683-eab847f03453 is in state STARTED
2026-02-04 00:44:49.954799 | orchestrator | 2026-02-04 00:44:49 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED
2026-02-04 00:44:49.955619 | orchestrator |
2026-02-04 00:44:49.955646 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-02-04 00:44:49.955654 | orchestrator |
2026-02-04 00:44:49.955660 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2026-02-04 00:44:49.955666 | orchestrator | Wednesday 04 February 2026 00:44:35 +0000 (0:00:00.275) 0:00:00.275 ****
2026-02-04 00:44:49.955672 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:44:49.955679 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:44:49.955685 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:44:49.955690 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:44:49.955696 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:44:49.955702 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:44:49.955707 | orchestrator | changed: [testbed-manager]
2026-02-04 00:44:49.955712 | orchestrator |
2026-02-04 00:44:49.955718 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.]
********
2026-02-04 00:44:49.955724 | orchestrator | Wednesday 04 February 2026 00:44:38 +0000 (0:00:03.521) 0:00:03.797 ****
2026-02-04 00:44:49.955730 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-02-04 00:44:49.955736 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-02-04 00:44:49.955742 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-02-04 00:44:49.955748 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-02-04 00:44:49.955754 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-02-04 00:44:49.955759 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-02-04 00:44:49.955764 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-02-04 00:44:49.955770 | orchestrator |
2026-02-04 00:44:49.955775 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-02-04 00:44:49.955781 | orchestrator | Wednesday 04 February 2026 00:44:40 +0000 (0:00:01.352) 0:00:05.150 ****
2026-02-04 00:44:49.955793 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-04 00:44:39.680631', 'end': '2026-02-04 00:44:39.684186', 'delta': '0:00:00.003555', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-04 00:44:49.955901 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-04 00:44:39.782074', 'end': '2026-02-04 00:44:39.792547', 'delta': '0:00:00.010473', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-04 00:44:49.955912 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-04 00:44:39.754561', 'end': '2026-02-04 00:44:39.762274', 'delta': '0:00:00.007713', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-04 00:44:49.955938 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-04 00:44:39.749736', 'end': '2026-02-04 00:44:39.758713', 'delta': '0:00:00.008977', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-04 00:44:49.955947 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-04 00:44:39.791194', 'end': '2026-02-04 00:44:39.799251', 'delta': '0:00:00.008057', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-04 00:44:49.955955 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-04 00:44:40.040131', 'end': '2026-02-04 00:44:40.049853', 'delta': '0:00:00.009722', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-04 00:44:49.956154 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-04 00:44:39.801049', 'end': '2026-02-04 00:44:39.806708', 'delta': '0:00:00.005659', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-02-04 00:44:49.956162 | orchestrator |
2026-02-04 00:44:49.956167 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.]
****
2026-02-04 00:44:49.956173 | orchestrator | Wednesday 04 February 2026 00:44:42 +0000 (0:00:02.472) 0:00:07.622 ****
2026-02-04 00:44:49.956178 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-02-04 00:44:49.956183 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-02-04 00:44:49.956187 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-02-04 00:44:49.956192 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-02-04 00:44:49.956196 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-02-04 00:44:49.956201 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-02-04 00:44:49.956206 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-02-04 00:44:49.956210 | orchestrator |
2026-02-04 00:44:49.956215 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-02-04 00:44:49.956220 | orchestrator | Wednesday 04 February 2026 00:44:45 +0000 (0:00:02.287) 0:00:09.909 ****
2026-02-04 00:44:49.956224 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-02-04 00:44:49.956229 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-02-04 00:44:49.956234 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-02-04 00:44:49.956238 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-02-04 00:44:49.956243 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-02-04 00:44:49.956247 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-02-04 00:44:49.956252 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-02-04 00:44:49.956256 | orchestrator |
2026-02-04 00:44:49.956261 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:44:49.956272 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:44:49.956278 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:44:49.956283 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:44:49.956287 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:44:49.956292 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:44:49.956302 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:44:49.956306 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 00:44:49.956311 | orchestrator |
2026-02-04 00:44:49.956316 | orchestrator |
2026-02-04 00:44:49.956320 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:44:49.956327 | orchestrator | Wednesday 04 February 2026 00:44:46 +0000 (0:00:01.619) 0:00:11.529 ****
2026-02-04 00:44:49.956332 | orchestrator | ===============================================================================
2026-02-04 00:44:49.956337 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.52s
2026-02-04 00:44:49.956341 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.47s
2026-02-04 00:44:49.956348 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.29s
2026-02-04 00:44:49.956355 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 1.62s
2026-02-04 00:44:49.956363 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links.
-------- 1.35s
2026-02-04 00:44:49.956371 | orchestrator | 2026-02-04 00:44:49 | INFO  | Task 1b7201e5-58fd-44a4-bc37-eef8894c4ee8 is in state SUCCESS
2026-02-04 00:44:49.956377 | orchestrator | 2026-02-04 00:44:49 | INFO  | Task 1b3df390-8282-4caf-a2df-af0d96a58c80 is in state STARTED
2026-02-04 00:44:49.956385 | orchestrator | 2026-02-04 00:44:49 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:44:56.031254 | orchestrator | 2026-02-04 00:44:56 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state STARTED
2026-02-04 00:44:56.031335 | orchestrator | 2026-02-04 00:44:56 | INFO  | Task d6bbe116-965e-40f6-b650-58470e6fad38 is in state STARTED
2026-02-04 00:44:56.031980 | orchestrator | 2026-02-04 00:44:56 | INFO  | Task b4f08f5a-6e95-4e3d-83f6-db5cc3f44223 is in state STARTED
2026-02-04 00:44:56.032850 | orchestrator | 2026-02-04 00:44:56 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state
STARTED
2026-02-04 00:45:08.308780 | orchestrator | 2026-02-04 00:45:08 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state STARTED
2026-02-04 00:45:08.308850 | orchestrator | 2026-02-04 00:45:08 | INFO  | Task d6bbe116-965e-40f6-b650-58470e6fad38 is in state STARTED
2026-02-04 00:45:08.308856 | orchestrator | 2026-02-04 00:45:08 | INFO  | Task b4f08f5a-6e95-4e3d-83f6-db5cc3f44223 is in state STARTED 2026-02-04 00:45:08.308861 | orchestrator | 2026-02-04 00:45:08 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:45:08.308884 | orchestrator | 2026-02-04 00:45:08 | INFO  | Task 669fe5c8-a9ce-4465-8683-eab847f03453 is in state STARTED 2026-02-04 00:45:08.308888 | orchestrator | 2026-02-04 00:45:08 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:45:08.308892 | orchestrator | 2026-02-04 00:45:08 | INFO  | Task 1b3df390-8282-4caf-a2df-af0d96a58c80 is in state STARTED 2026-02-04 00:45:08.308897 | orchestrator | 2026-02-04 00:45:08 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:45:11.687710 | orchestrator | 2026-02-04 00:45:11 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state STARTED 2026-02-04 00:45:11.687783 | orchestrator | 2026-02-04 00:45:11 | INFO  | Task d6bbe116-965e-40f6-b650-58470e6fad38 is in state STARTED 2026-02-04 00:45:11.687791 | orchestrator | 2026-02-04 00:45:11 | INFO  | Task b4f08f5a-6e95-4e3d-83f6-db5cc3f44223 is in state STARTED 2026-02-04 00:45:11.687797 | orchestrator | 2026-02-04 00:45:11 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:45:11.687802 | orchestrator | 2026-02-04 00:45:11 | INFO  | Task 669fe5c8-a9ce-4465-8683-eab847f03453 is in state SUCCESS 2026-02-04 00:45:11.687807 | orchestrator | 2026-02-04 00:45:11 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:45:11.687812 | orchestrator | 2026-02-04 00:45:11 | INFO  | Task 1b3df390-8282-4caf-a2df-af0d96a58c80 is in state STARTED 2026-02-04 00:45:11.687816 | orchestrator | 2026-02-04 00:45:11 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:45:14.577495 | orchestrator | 2026-02-04 00:45:14 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state STARTED 
2026-02-04 00:45:20.683035 | orchestrator | 2026-02-04 00:45:20 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state STARTED
2026-02-04 00:45:20.683426 | orchestrator | 2026-02-04 00:45:20 | INFO  | Task d6bbe116-965e-40f6-b650-58470e6fad38 is in state STARTED
2026-02-04 00:45:20.684481 | orchestrator | 2026-02-04 00:45:20 | INFO  | Task b4f08f5a-6e95-4e3d-83f6-db5cc3f44223 is in state STARTED 2026-02-04 00:45:20.685794 | orchestrator | 2026-02-04 00:45:20 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:45:20.687275 | orchestrator | 2026-02-04 00:45:20 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:45:20.687564 | orchestrator | 2026-02-04 00:45:20 | INFO  | Task 1b3df390-8282-4caf-a2df-af0d96a58c80 is in state SUCCESS 2026-02-04 00:45:20.687709 | orchestrator | 2026-02-04 00:45:20 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:45:23.723016 | orchestrator | 2026-02-04 00:45:23 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state STARTED 2026-02-04 00:45:23.725047 | orchestrator | 2026-02-04 00:45:23 | INFO  | Task d6bbe116-965e-40f6-b650-58470e6fad38 is in state STARTED 2026-02-04 00:45:23.727641 | orchestrator | 2026-02-04 00:45:23 | INFO  | Task b4f08f5a-6e95-4e3d-83f6-db5cc3f44223 is in state STARTED 2026-02-04 00:45:23.729775 | orchestrator | 2026-02-04 00:45:23 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:45:23.731255 | orchestrator | 2026-02-04 00:45:23 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:45:23.731303 | orchestrator | 2026-02-04 00:45:23 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:45:26.778756 | orchestrator | 2026-02-04 00:45:26 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state STARTED 2026-02-04 00:45:26.782502 | orchestrator | 2026-02-04 00:45:26 | INFO  | Task d6bbe116-965e-40f6-b650-58470e6fad38 is in state STARTED 2026-02-04 00:45:26.785350 | orchestrator | 2026-02-04 00:45:26 | INFO  | Task b4f08f5a-6e95-4e3d-83f6-db5cc3f44223 is in state STARTED 2026-02-04 00:45:26.787793 | orchestrator | 2026-02-04 00:45:26 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 
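The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above are a client polling remote task state until every task reaches a terminal state (SUCCESS here). A minimal sketch of such a loop, assuming a hypothetical `get_state` lookup; the actual OSISM client implementation is not shown in this log:

```python
import time
from typing import Callable, Dict, Iterable

# Assumed Celery-style terminal states; hypothetical for this sketch.
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(get_state: Callable[[str], str],
                   task_ids: Iterable[str],
                   interval: float = 1.0) -> Dict[str, str]:
    """Poll each pending task's state until all reach a terminal state.

    Mirrors the log pattern above: report every task's state, then wait
    `interval` seconds before the next check. `get_state` is a stand-in
    for a real lookup such as a result-backend query.
    """
    pending = set(task_ids)
    results: Dict[str, str] = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                results[task_id] = state
        pending -= results.keys()       # drop finished tasks
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return results
```

Tasks finishing at different times, as in this log, simply drop out of the polling set one by one while the rest keep being rechecked.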
2026-02-04 00:45:26.791638 | orchestrator | 2026-02-04 00:45:26 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED
2026-02-04 00:45:26.792424 | orchestrator | 2026-02-04 00:45:26 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:45:51.357175 | orchestrator | 2026-02-04 00:45:51 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state STARTED
2026-02-04 00:45:51.360961 | orchestrator | 2026-02-04
00:45:51 | INFO  | Task d6bbe116-965e-40f6-b650-58470e6fad38 is in state STARTED 2026-02-04 00:45:51.361256 | orchestrator | 2026-02-04 00:45:51 | INFO  | Task b4f08f5a-6e95-4e3d-83f6-db5cc3f44223 is in state STARTED 2026-02-04 00:45:51.362258 | orchestrator | 2026-02-04 00:45:51 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:45:51.362988 | orchestrator | 2026-02-04 00:45:51 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:45:51.363058 | orchestrator | 2026-02-04 00:45:51 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:45:54.416969 | orchestrator | 2026-02-04 00:45:54 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state STARTED 2026-02-04 00:45:54.417122 | orchestrator | 2026-02-04 00:45:54 | INFO  | Task d6bbe116-965e-40f6-b650-58470e6fad38 is in state STARTED 2026-02-04 00:45:54.417188 | orchestrator | 2026-02-04 00:45:54 | INFO  | Task b4f08f5a-6e95-4e3d-83f6-db5cc3f44223 is in state SUCCESS 2026-02-04 00:45:54.420081 | orchestrator | 2026-02-04 00:45:54.420148 | orchestrator | 2026-02-04 00:45:54.420158 | orchestrator | PLAY [Apply role homer] ******************************************************** 2026-02-04 00:45:54.420168 | orchestrator | 2026-02-04 00:45:54.420179 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2026-02-04 00:45:54.420191 | orchestrator | Wednesday 04 February 2026 00:44:34 +0000 (0:00:00.200) 0:00:00.200 **** 2026-02-04 00:45:54.420201 | orchestrator | ok: [testbed-manager] => { 2026-02-04 00:45:54.420214 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2026-02-04 00:45:54.420226 | orchestrator | } 2026-02-04 00:45:54.420238 | orchestrator | 2026-02-04 00:45:54.420245 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2026-02-04 00:45:54.420251 | orchestrator | Wednesday 04 February 2026 00:44:34 +0000 (0:00:00.209) 0:00:00.410 **** 2026-02-04 00:45:54.420258 | orchestrator | ok: [testbed-manager] 2026-02-04 00:45:54.420266 | orchestrator | 2026-02-04 00:45:54.420273 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2026-02-04 00:45:54.420284 | orchestrator | Wednesday 04 February 2026 00:44:35 +0000 (0:00:01.292) 0:00:01.703 **** 2026-02-04 00:45:54.420295 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2026-02-04 00:45:54.420305 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2026-02-04 00:45:54.420313 | orchestrator | 2026-02-04 00:45:54.420320 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2026-02-04 00:45:54.420331 | orchestrator | Wednesday 04 February 2026 00:44:37 +0000 (0:00:01.947) 0:00:03.650 **** 2026-02-04 00:45:54.420344 | orchestrator | changed: [testbed-manager] 2026-02-04 00:45:54.420354 | orchestrator | 2026-02-04 00:45:54.420366 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2026-02-04 00:45:54.420382 | orchestrator | Wednesday 04 February 2026 00:44:40 +0000 (0:00:02.491) 0:00:06.141 **** 2026-02-04 00:45:54.420392 | orchestrator | changed: [testbed-manager] 2026-02-04 00:45:54.420402 | orchestrator | 2026-02-04 00:45:54.420413 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2026-02-04 00:45:54.420424 | orchestrator | Wednesday 04 February 2026 00:44:41 +0000 (0:00:01.100) 0:00:07.242 **** 2026-02-04 00:45:54.420432 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2026-02-04 00:45:54.420463 | orchestrator | ok: [testbed-manager] 2026-02-04 00:45:54.420474 | orchestrator | 2026-02-04 00:45:54.420484 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2026-02-04 00:45:54.420496 | orchestrator | Wednesday 04 February 2026 00:45:07 +0000 (0:00:26.206) 0:00:33.448 **** 2026-02-04 00:45:54.420503 | orchestrator | changed: [testbed-manager] 2026-02-04 00:45:54.420513 | orchestrator | 2026-02-04 00:45:54.420523 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:45:54.420559 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:45:54.420571 | orchestrator | 2026-02-04 00:45:54.420581 | orchestrator | 2026-02-04 00:45:54.420592 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:45:54.420602 | orchestrator | Wednesday 04 February 2026 00:45:09 +0000 (0:00:01.941) 0:00:35.389 **** 2026-02-04 00:45:54.420612 | orchestrator | =============================================================================== 2026-02-04 00:45:54.420623 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.21s 2026-02-04 00:45:54.420661 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.49s 2026-02-04 00:45:54.420669 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.95s 2026-02-04 00:45:54.420677 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.94s 2026-02-04 00:45:54.420685 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.29s 2026-02-04 00:45:54.420692 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.10s 2026-02-04 00:45:54.420700 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.21s 2026-02-04 00:45:54.420707 | orchestrator | 2026-02-04 00:45:54.420714 | orchestrator | 2026-02-04 00:45:54.420722 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-02-04 00:45:54.420731 | orchestrator | 2026-02-04 00:45:54.420742 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-02-04 00:45:54.420752 | orchestrator | Wednesday 04 February 2026 00:44:34 +0000 (0:00:00.386) 0:00:00.386 **** 2026-02-04 00:45:54.420796 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-02-04 00:45:54.420807 | orchestrator | 2026-02-04 00:45:54.420822 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-02-04 00:45:54.420833 | orchestrator | Wednesday 04 February 2026 00:44:35 +0000 (0:00:00.458) 0:00:00.844 **** 2026-02-04 00:45:54.420844 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-02-04 00:45:54.420856 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-02-04 00:45:54.420867 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-02-04 00:45:54.420878 | orchestrator | 2026-02-04 00:45:54.420888 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-02-04 00:45:54.420896 | orchestrator | Wednesday 04 February 2026 00:44:37 +0000 (0:00:02.074) 0:00:02.919 **** 2026-02-04 00:45:54.420904 | orchestrator | changed: [testbed-manager] 2026-02-04 00:45:54.420911 | orchestrator | 2026-02-04 00:45:54.420918 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-02-04 00:45:54.420929 | orchestrator | Wednesday 04 February 2026 00:44:39 +0000 (0:00:02.458) 
0:00:05.378 **** 2026-02-04 00:45:54.420957 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-02-04 00:45:54.420967 | orchestrator | ok: [testbed-manager] 2026-02-04 00:45:54.420975 | orchestrator | 2026-02-04 00:45:54.420985 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-02-04 00:45:54.420995 | orchestrator | Wednesday 04 February 2026 00:45:12 +0000 (0:00:32.661) 0:00:38.040 **** 2026-02-04 00:45:54.421005 | orchestrator | changed: [testbed-manager] 2026-02-04 00:45:54.421027 | orchestrator | 2026-02-04 00:45:54.421034 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-02-04 00:45:54.421040 | orchestrator | Wednesday 04 February 2026 00:45:13 +0000 (0:00:01.229) 0:00:39.269 **** 2026-02-04 00:45:54.421046 | orchestrator | ok: [testbed-manager] 2026-02-04 00:45:54.421052 | orchestrator | 2026-02-04 00:45:54.421059 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-02-04 00:45:54.421065 | orchestrator | Wednesday 04 February 2026 00:45:14 +0000 (0:00:01.204) 0:00:40.474 **** 2026-02-04 00:45:54.421072 | orchestrator | changed: [testbed-manager] 2026-02-04 00:45:54.421078 | orchestrator | 2026-02-04 00:45:54.421135 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-02-04 00:45:54.421156 | orchestrator | Wednesday 04 February 2026 00:45:16 +0000 (0:00:02.194) 0:00:42.668 **** 2026-02-04 00:45:54.421170 | orchestrator | changed: [testbed-manager] 2026-02-04 00:45:54.421176 | orchestrator | 2026-02-04 00:45:54.421183 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-02-04 00:45:54.421189 | orchestrator | Wednesday 04 February 2026 00:45:17 +0000 (0:00:00.651) 0:00:43.319 **** 2026-02-04 00:45:54.421195 | orchestrator | changed: 
[testbed-manager] 2026-02-04 00:45:54.421201 | orchestrator | 2026-02-04 00:45:54.421208 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-02-04 00:45:54.421214 | orchestrator | Wednesday 04 February 2026 00:45:18 +0000 (0:00:00.453) 0:00:43.773 **** 2026-02-04 00:45:54.421220 | orchestrator | ok: [testbed-manager] 2026-02-04 00:45:54.421227 | orchestrator | 2026-02-04 00:45:54.421233 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:45:54.421239 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:45:54.421247 | orchestrator | 2026-02-04 00:45:54.421253 | orchestrator | 2026-02-04 00:45:54.421259 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:45:54.421265 | orchestrator | Wednesday 04 February 2026 00:45:18 +0000 (0:00:00.341) 0:00:44.115 **** 2026-02-04 00:45:54.421272 | orchestrator | =============================================================================== 2026-02-04 00:45:54.421278 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 32.66s 2026-02-04 00:45:54.421284 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.46s 2026-02-04 00:45:54.421291 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.19s 2026-02-04 00:45:54.421297 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.07s 2026-02-04 00:45:54.421303 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.23s 2026-02-04 00:45:54.421309 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.20s 2026-02-04 00:45:54.421316 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.65s 
2026-02-04 00:45:54.421322 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.46s 2026-02-04 00:45:54.421328 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.45s 2026-02-04 00:45:54.421334 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.35s 2026-02-04 00:45:54.421342 | orchestrator | 2026-02-04 00:45:54.421352 | orchestrator | 2026-02-04 00:45:54.421362 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 00:45:54.421372 | orchestrator | 2026-02-04 00:45:54.421382 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 00:45:54.421393 | orchestrator | Wednesday 04 February 2026 00:44:34 +0000 (0:00:00.384) 0:00:00.384 **** 2026-02-04 00:45:54.421400 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-02-04 00:45:54.421406 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-02-04 00:45:54.421413 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-02-04 00:45:54.421425 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-02-04 00:45:54.421431 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-02-04 00:45:54.421437 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-02-04 00:45:54.421444 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-02-04 00:45:54.421450 | orchestrator | 2026-02-04 00:45:54.421460 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-02-04 00:45:54.421466 | orchestrator | 2026-02-04 00:45:54.421473 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-02-04 00:45:54.421479 | orchestrator | Wednesday 04 February 2026 00:44:36 +0000 
(0:00:02.215) 0:00:02.600 **** 2026-02-04 00:45:54.421494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:45:54.421503 | orchestrator | 2026-02-04 00:45:54.421509 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-02-04 00:45:54.421515 | orchestrator | Wednesday 04 February 2026 00:44:37 +0000 (0:00:01.083) 0:00:03.683 **** 2026-02-04 00:45:54.421522 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:45:54.421528 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:45:54.421557 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:45:54.421568 | orchestrator | ok: [testbed-manager] 2026-02-04 00:45:54.421578 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:45:54.421596 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:45:54.421607 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:45:54.421617 | orchestrator | 2026-02-04 00:45:54.421629 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-02-04 00:45:54.421639 | orchestrator | Wednesday 04 February 2026 00:44:40 +0000 (0:00:02.683) 0:00:06.366 **** 2026-02-04 00:45:54.421648 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:45:54.421655 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:45:54.421661 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:45:54.421667 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:45:54.421673 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:45:54.421680 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:45:54.421686 | orchestrator | ok: [testbed-manager] 2026-02-04 00:45:54.421692 | orchestrator | 2026-02-04 00:45:54.421699 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-02-04 00:45:54.421708 | 
orchestrator | Wednesday 04 February 2026 00:44:43 +0000 (0:00:03.254) 0:00:09.621 **** 2026-02-04 00:45:54.421717 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:45:54.421728 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:45:54.421738 | orchestrator | changed: [testbed-manager] 2026-02-04 00:45:54.421748 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:45:54.421759 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:45:54.421768 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:45:54.421777 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:45:54.421787 | orchestrator | 2026-02-04 00:45:54.421798 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-02-04 00:45:54.421807 | orchestrator | Wednesday 04 February 2026 00:44:46 +0000 (0:00:02.684) 0:00:12.306 **** 2026-02-04 00:45:54.421813 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:45:54.421819 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:45:54.421825 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:45:54.421832 | orchestrator | changed: [testbed-manager] 2026-02-04 00:45:54.421838 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:45:54.421846 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:45:54.421856 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:45:54.421866 | orchestrator | 2026-02-04 00:45:54.421876 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-02-04 00:45:54.421887 | orchestrator | Wednesday 04 February 2026 00:44:57 +0000 (0:00:10.566) 0:00:22.872 **** 2026-02-04 00:45:54.421900 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:45:54.421906 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:45:54.421912 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:45:54.421919 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:45:54.421925 | orchestrator | changed: [testbed-node-3] 
2026-02-04 00:45:54.421931 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:45:54.421937 | orchestrator | changed: [testbed-manager] 2026-02-04 00:45:54.421944 | orchestrator | 2026-02-04 00:45:54.421950 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-02-04 00:45:54.421956 | orchestrator | Wednesday 04 February 2026 00:45:31 +0000 (0:00:34.246) 0:00:57.119 **** 2026-02-04 00:45:54.421963 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:45:54.421972 | orchestrator | 2026-02-04 00:45:54.421978 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-02-04 00:45:54.421984 | orchestrator | Wednesday 04 February 2026 00:45:32 +0000 (0:00:01.509) 0:00:58.629 **** 2026-02-04 00:45:54.421990 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-02-04 00:45:54.421997 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-02-04 00:45:54.422004 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-02-04 00:45:54.422010 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-02-04 00:45:54.422063 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-02-04 00:45:54.422075 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-02-04 00:45:54.422084 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-02-04 00:45:54.422094 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-02-04 00:45:54.422105 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-02-04 00:45:54.422116 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-02-04 00:45:54.422123 | orchestrator | changed: [testbed-node-2] => 
(item=stream.conf) 2026-02-04 00:45:54.422129 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2026-02-04 00:45:54.422135 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-02-04 00:45:54.422141 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-02-04 00:45:54.422148 | orchestrator | 2026-02-04 00:45:54.422154 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-02-04 00:45:54.422165 | orchestrator | Wednesday 04 February 2026 00:45:39 +0000 (0:00:06.506) 0:01:05.135 **** 2026-02-04 00:45:54.422172 | orchestrator | ok: [testbed-manager] 2026-02-04 00:45:54.422179 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:45:54.422185 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:45:54.422191 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:45:54.422198 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:45:54.422204 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:45:54.422221 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:45:54.422230 | orchestrator | 2026-02-04 00:45:54.422250 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-02-04 00:45:54.422258 | orchestrator | Wednesday 04 February 2026 00:45:40 +0000 (0:00:01.061) 0:01:06.196 **** 2026-02-04 00:45:54.422272 | orchestrator | changed: [testbed-manager] 2026-02-04 00:45:54.422287 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:45:54.422297 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:45:54.422306 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:45:54.422316 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:45:54.422325 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:45:54.422335 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:45:54.422345 | orchestrator | 2026-02-04 00:45:54.422355 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] 
*************** 2026-02-04 00:45:54.422373 | orchestrator | Wednesday 04 February 2026 00:45:41 +0000 (0:00:01.302) 0:01:07.498 **** 2026-02-04 00:45:54.422390 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:45:54.422399 | orchestrator | ok: [testbed-manager] 2026-02-04 00:45:54.422408 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:45:54.422416 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:45:54.422425 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:45:54.422435 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:45:54.422444 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:45:54.422453 | orchestrator | 2026-02-04 00:45:54.422463 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-02-04 00:45:54.422472 | orchestrator | Wednesday 04 February 2026 00:45:43 +0000 (0:00:01.775) 0:01:09.274 **** 2026-02-04 00:45:54.422481 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:45:54.422490 | orchestrator | ok: [testbed-manager] 2026-02-04 00:45:54.422499 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:45:54.422510 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:45:54.422519 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:45:54.422528 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:45:54.422570 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:45:54.422579 | orchestrator | 2026-02-04 00:45:54.422589 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-02-04 00:45:54.422599 | orchestrator | Wednesday 04 February 2026 00:45:45 +0000 (0:00:02.061) 0:01:11.336 **** 2026-02-04 00:45:54.422608 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-02-04 00:45:54.422620 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:45:54.422630 | orchestrator | 2026-02-04 00:45:54.422640 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-02-04 00:45:54.422651 | orchestrator | Wednesday 04 February 2026 00:45:46 +0000 (0:00:01.385) 0:01:12.722 **** 2026-02-04 00:45:54.422661 | orchestrator | changed: [testbed-manager] 2026-02-04 00:45:54.422671 | orchestrator | 2026-02-04 00:45:54.422681 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-02-04 00:45:54.422690 | orchestrator | Wednesday 04 February 2026 00:45:49 +0000 (0:00:02.160) 0:01:14.882 **** 2026-02-04 00:45:54.422700 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:45:54.422710 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:45:54.422720 | orchestrator | changed: [testbed-manager] 2026-02-04 00:45:54.422731 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:45:54.422738 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:45:54.422744 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:45:54.422750 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:45:54.422757 | orchestrator | 2026-02-04 00:45:54.422763 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:45:54.422770 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:45:54.422777 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:45:54.422784 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:45:54.422790 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:45:54.422797 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2026-02-04 00:45:54.422803 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:45:54.422809 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:45:54.422827 | orchestrator | 2026-02-04 00:45:54.422833 | orchestrator | 2026-02-04 00:45:54.422840 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:45:54.422848 | orchestrator | Wednesday 04 February 2026 00:45:51 +0000 (0:00:02.823) 0:01:17.706 **** 2026-02-04 00:45:54.422858 | orchestrator | =============================================================================== 2026-02-04 00:45:54.422867 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 34.25s 2026-02-04 00:45:54.422883 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.57s 2026-02-04 00:45:54.422894 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.51s 2026-02-04 00:45:54.422904 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.25s 2026-02-04 00:45:54.422915 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.82s 2026-02-04 00:45:54.422926 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.68s 2026-02-04 00:45:54.422937 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.68s 2026-02-04 00:45:54.422944 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.22s 2026-02-04 00:45:54.422955 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.16s 2026-02-04 00:45:54.422965 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.06s 2026-02-04 
00:45:54.422975 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.78s 2026-02-04 00:45:54.422995 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.51s 2026-02-04 00:45:54.423003 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.39s 2026-02-04 00:45:54.423009 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.30s 2026-02-04 00:45:54.423015 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.08s 2026-02-04 00:45:54.423022 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.06s 2026-02-04 00:45:54.423028 | orchestrator | 2026-02-04 00:45:54 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:45:54.423035 | orchestrator | 2026-02-04 00:45:54 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:45:54.423041 | orchestrator | 2026-02-04 00:45:54 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:45:57.454257 | orchestrator | 2026-02-04 00:45:57 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state STARTED 2026-02-04 00:45:57.459160 | orchestrator | 2026-02-04 00:45:57 | INFO  | Task d6bbe116-965e-40f6-b650-58470e6fad38 is in state STARTED 2026-02-04 00:45:57.463656 | orchestrator | 2026-02-04 00:45:57 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:45:57.465053 | orchestrator | 2026-02-04 00:45:57 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:45:57.465108 | orchestrator | 2026-02-04 00:45:57 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:46:00.616973 | orchestrator | 2026-02-04 00:46:00 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state STARTED 2026-02-04 00:46:00.619762 | orchestrator | 2026-02-04 00:46:00 | INFO  
| Task d6bbe116-965e-40f6-b650-58470e6fad38 is in state STARTED 2026-02-04 00:46:00.620840 | orchestrator | 2026-02-04 00:46:00 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:46:00.622699 | orchestrator | 2026-02-04 00:46:00 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:46:00.622743 | orchestrator | 2026-02-04 00:46:00 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:46:03.691162 | orchestrator | 2026-02-04 00:46:03 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state STARTED 2026-02-04 00:46:03.694192 | orchestrator | 2026-02-04 00:46:03 | INFO  | Task d6bbe116-965e-40f6-b650-58470e6fad38 is in state STARTED 2026-02-04 00:46:03.694965 | orchestrator | 2026-02-04 00:46:03 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:46:03.699461 | orchestrator | 2026-02-04 00:46:03 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:46:03.699511 | orchestrator | 2026-02-04 00:46:03 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:46:06.761486 | orchestrator | 2026-02-04 00:46:06 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state STARTED 2026-02-04 00:46:06.762299 | orchestrator | 2026-02-04 00:46:06 | INFO  | Task d6bbe116-965e-40f6-b650-58470e6fad38 is in state STARTED 2026-02-04 00:46:06.766826 | orchestrator | 2026-02-04 00:46:06 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:46:06.766865 | orchestrator | 2026-02-04 00:46:06 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:46:06.766883 | orchestrator | 2026-02-04 00:46:06 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:46:09.797840 | orchestrator | 2026-02-04 00:46:09 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state STARTED 2026-02-04 00:46:09.799281 | orchestrator | 2026-02-04 00:46:09 | INFO  | Task 
d6bbe116-965e-40f6-b650-58470e6fad38 is in state SUCCESS 2026-02-04 00:46:09.802175 | orchestrator | 2026-02-04 00:46:09 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:46:09.802933 | orchestrator | 2026-02-04 00:46:09 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:46:09.802955 | orchestrator | 2026-02-04 00:46:09 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:46:12.845161 | orchestrator | 2026-02-04 00:46:12 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state STARTED 2026-02-04 00:46:12.847148 | orchestrator | 2026-02-04 00:46:12 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:46:12.848400 | orchestrator | 2026-02-04 00:46:12 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:46:12.848448 | orchestrator | 2026-02-04 00:46:12 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:46:15.888274 | orchestrator | 2026-02-04 00:46:15 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state STARTED 2026-02-04 00:46:15.892222 | orchestrator | 2026-02-04 00:46:15 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:46:15.894793 | orchestrator | 2026-02-04 00:46:15 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:46:15.895223 | orchestrator | 2026-02-04 00:46:15 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:46:18.942979 | orchestrator | 2026-02-04 00:46:18 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state STARTED 2026-02-04 00:46:18.945157 | orchestrator | 2026-02-04 00:46:18 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:46:18.948983 | orchestrator | 2026-02-04 00:46:18 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:46:18.949079 | orchestrator | 2026-02-04 00:46:18 | INFO  | Wait 1 second(s) until the next 
check 2026-02-04 00:46:21.998222 | orchestrator | 2026-02-04 00:46:21 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state STARTED 2026-02-04 00:46:21.998482 | orchestrator | 2026-02-04 00:46:21 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:46:21.998851 | orchestrator | 2026-02-04 00:46:21 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:46:21.998929 | orchestrator | 2026-02-04 00:46:21 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:46:25.046295 | orchestrator | 2026-02-04 00:46:25 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state STARTED 2026-02-04 00:46:25.046370 | orchestrator | 2026-02-04 00:46:25 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:46:25.046850 | orchestrator | 2026-02-04 00:46:25 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:46:25.046890 | orchestrator | 2026-02-04 00:46:25 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:46:28.095164 | orchestrator | 2026-02-04 00:46:28 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state STARTED 2026-02-04 00:46:28.099103 | orchestrator | 2026-02-04 00:46:28 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:46:28.101711 | orchestrator | 2026-02-04 00:46:28 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:46:28.101771 | orchestrator | 2026-02-04 00:46:28 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:46:31.158304 | orchestrator | 2026-02-04 00:46:31 | INFO  | Task fa19e49f-5a3e-44b6-b470-63e3eede8929 is in state SUCCESS 2026-02-04 00:46:31.159878 | orchestrator | 2026-02-04 00:46:31.159944 | orchestrator | 2026-02-04 00:46:31.159957 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2026-02-04 00:46:31.159967 | orchestrator | 2026-02-04 00:46:31.159975 | 
orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2026-02-04 00:46:31.159984 | orchestrator | Wednesday 04 February 2026 00:44:51 +0000 (0:00:00.234) 0:00:00.234 **** 2026-02-04 00:46:31.159993 | orchestrator | ok: [testbed-manager] 2026-02-04 00:46:31.160002 | orchestrator | 2026-02-04 00:46:31.160011 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2026-02-04 00:46:31.160020 | orchestrator | Wednesday 04 February 2026 00:44:52 +0000 (0:00:01.368) 0:00:01.602 **** 2026-02-04 00:46:31.160049 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2026-02-04 00:46:31.160056 | orchestrator | 2026-02-04 00:46:31.160062 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2026-02-04 00:46:31.160067 | orchestrator | Wednesday 04 February 2026 00:44:53 +0000 (0:00:00.550) 0:00:02.152 **** 2026-02-04 00:46:31.160177 | orchestrator | changed: [testbed-manager] 2026-02-04 00:46:31.160184 | orchestrator | 2026-02-04 00:46:31.160190 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2026-02-04 00:46:31.160195 | orchestrator | Wednesday 04 February 2026 00:44:54 +0000 (0:00:01.393) 0:00:03.546 **** 2026-02-04 00:46:31.160209 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2026-02-04 00:46:31.160217 | orchestrator | ok: [testbed-manager] 2026-02-04 00:46:31.160226 | orchestrator | 2026-02-04 00:46:31.160235 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2026-02-04 00:46:31.160243 | orchestrator | Wednesday 04 February 2026 00:45:57 +0000 (0:01:02.322) 0:01:05.869 **** 2026-02-04 00:46:31.160252 | orchestrator | changed: [testbed-manager] 2026-02-04 00:46:31.160277 | orchestrator | 2026-02-04 00:46:31.160285 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:46:31.160290 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:46:31.160298 | orchestrator | 2026-02-04 00:46:31.160303 | orchestrator | 2026-02-04 00:46:31.160308 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:46:31.160330 | orchestrator | Wednesday 04 February 2026 00:46:06 +0000 (0:00:09.555) 0:01:15.424 **** 2026-02-04 00:46:31.160335 | orchestrator | =============================================================================== 2026-02-04 00:46:31.160341 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 62.32s 2026-02-04 00:46:31.160346 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 9.56s 2026-02-04 00:46:31.160351 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.39s 2026-02-04 00:46:31.160356 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.37s 2026-02-04 00:46:31.160362 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.55s 2026-02-04 00:46:31.160367 | orchestrator | 2026-02-04 00:46:31.160372 | orchestrator | 2026-02-04 00:46:31.160377 | orchestrator | PLAY [Apply role common] 
******************************************************* 2026-02-04 00:46:31.160382 | orchestrator | 2026-02-04 00:46:31.160387 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-04 00:46:31.160392 | orchestrator | Wednesday 04 February 2026 00:44:27 +0000 (0:00:00.214) 0:00:00.214 **** 2026-02-04 00:46:31.160398 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:46:31.160406 | orchestrator | 2026-02-04 00:46:31.160412 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-04 00:46:31.160419 | orchestrator | Wednesday 04 February 2026 00:44:29 +0000 (0:00:01.405) 0:00:01.619 **** 2026-02-04 00:46:31.160425 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-04 00:46:31.160431 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-04 00:46:31.160437 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-04 00:46:31.160443 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-04 00:46:31.160449 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-04 00:46:31.160455 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-04 00:46:31.160461 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-04 00:46:31.160468 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-04 00:46:31.160473 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-04 00:46:31.160481 | orchestrator | changed: [testbed-manager] => 
(item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-04 00:46:31.160487 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-04 00:46:31.160493 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-04 00:46:31.160499 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-04 00:46:31.160505 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-04 00:46:31.160512 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-04 00:46:31.160518 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-04 00:46:31.160558 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-04 00:46:31.160565 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-04 00:46:31.160570 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-04 00:46:31.160576 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-04 00:46:31.160581 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-04 00:46:31.160592 | orchestrator | 2026-02-04 00:46:31.160597 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-04 00:46:31.160603 | orchestrator | Wednesday 04 February 2026 00:44:33 +0000 (0:00:03.910) 0:00:05.530 **** 2026-02-04 00:46:31.160608 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:46:31.160615 | orchestrator | 2026-02-04 
00:46:31.160620 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-04 00:46:31.160637 | orchestrator | Wednesday 04 February 2026 00:44:34 +0000 (0:00:01.362) 0:00:06.893 **** 2026-02-04 00:46:31.160646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:46:31.160655 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:46:31.160667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:46:31.160675 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:46:31.160685 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.160708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.160723 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:46:31.160736 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:46:31.160742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.160748 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:46:31.160753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.160761 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.160769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.160786 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.160795 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.160801 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.160806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.160812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.160817 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.160823 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.160828 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.160838 | orchestrator | 2026-02-04 00:46:31.160844 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-04 00:46:31.160852 | orchestrator | Wednesday 04 February 2026 00:44:40 +0000 (0:00:05.602) 0:00:12.496 **** 2026-02-04 00:46:31.160858 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:46:31.160867 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.160873 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.160878 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:46:31.160884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:46:31.160890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.160895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.160901 | 
orchestrator | skipping: [testbed-node-0] 2026-02-04 00:46:31.160906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:46:31.160926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.160932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.160938 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:46:31.160946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:46:31.160952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.160957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.160963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:46:31.160968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.160977 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:46:31.160986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.160991 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:31.160997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:46:31.161005 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.161011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.161016 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:46:31.161021 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:46:31.161027 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.161037 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.161042 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:46:31.161047 | orchestrator | 2026-02-04 00:46:31.161053 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-04 00:46:31.161058 | orchestrator | Wednesday 04 February 2026 00:44:41 +0000 (0:00:01.676) 0:00:14.172 **** 2026-02-04 00:46:31.161064 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:46:31.161073 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.161081 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.161087 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:46:31.161093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:46:31.161098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 
True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.161104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.161114 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:46:31.161119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:46:31.161125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-04 00:46:31.161422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.161488 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:46:31.161498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:46:31.161507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.161517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.161547 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:46:31.161556 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:46:31.161581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.161592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.161598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:46:31.161604 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:46:31.161616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 00:46:31.161624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.161630 | orchestrator 
| skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.161636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.161646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:46:31.161651 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:46:31.161657 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:31.161662 | orchestrator | 2026-02-04 00:46:31.161668 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-04 00:46:31.161673 | orchestrator | Wednesday 04 February 2026 00:44:44 +0000 (0:00:03.024) 0:00:17.196 **** 2026-02-04 
00:46:31.161679 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:46:31.161684 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:46:31.161689 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:46:31.161695 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:46:31.161700 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:46:31.161705 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:31.161710 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:46:31.161715 | orchestrator | 2026-02-04 00:46:31.161721 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-04 00:46:31.161726 | orchestrator | Wednesday 04 February 2026 00:44:46 +0000 (0:00:01.263) 0:00:18.461 **** 2026-02-04 00:46:31.161731 | orchestrator | skipping: [testbed-manager] 2026-02-04 00:46:31.161737 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:46:31.161742 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:46:31.161747 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:46:31.161752 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:46:31.161757 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:46:31.161763 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:46:31.161768 | orchestrator | 2026-02-04 00:46:31.161773 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-04 00:46:31.161779 | orchestrator | Wednesday 04 February 2026 00:44:48 +0000 (0:00:01.894) 0:00:20.355 **** 2026-02-04 00:46:31.161788 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:46:31.161794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:46:31.161803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:46:31.161815 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:46:31.161821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:46:31.161826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.161832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.161840 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.161846 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:46:31.161854 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 00:46:31.161863 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.161890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.161896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.161902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.161907 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.161921 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.161927 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.161942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.161947 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.161953 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.161958 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:46:31.161964 | orchestrator | 2026-02-04 00:46:31.161983 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-04 00:46:31.161990 | orchestrator | Wednesday 04 February 2026 00:44:54 +0000 (0:00:05.996) 0:00:26.352 **** 2026-02-04 00:46:31.161998 | orchestrator | [WARNING]: Skipped 2026-02-04 00:46:31.162007 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-04 00:46:31.162066 | 
orchestrator | to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a directory
2026-02-04 00:46:31.162105 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-04 00:46:31.162191 | orchestrator |
2026-02-04 00:46:31.162198 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-02-04 00:46:31.162204 | orchestrator | Wednesday 04 February 2026 00:44:55 +0000 (0:00:01.235) 0:00:27.587 ****
2026-02-04 00:46:31.162209 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a directory
2026-02-04 00:46:31.162236 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-04 00:46:31.162241 | orchestrator |
2026-02-04 00:46:31.162246 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-02-04 00:46:31.162251 | orchestrator | Wednesday 04 February 2026 00:44:56 +0000 (0:00:00.781) 0:00:28.369 ****
2026-02-04 00:46:31.162257 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a directory
2026-02-04 00:46:31.162290 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-04 00:46:31.162295 | orchestrator |
2026-02-04 00:46:31.162307 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-02-04 00:46:31.162313 | orchestrator | Wednesday 04 February 2026 00:44:56 +0000 (0:00:00.617) 0:00:28.986 ****
2026-02-04 00:46:31.162318 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a directory
2026-02-04 00:46:31.162344 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-04 00:46:31.162350 | orchestrator |
2026-02-04 00:46:31.162355 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-02-04 00:46:31.162360 | orchestrator | Wednesday 04 February 2026 00:44:57 +0000 (0:00:00.791) 0:00:29.778 ****
2026-02-04 00:46:31.162405 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:46:31.162411 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:46:31.162416 | orchestrator | changed: [testbed-manager]
2026-02-04 00:46:31.162421 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:46:31.162427 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:46:31.162432 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:46:31.162437 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:46:31.162442 | orchestrator |
2026-02-04 00:46:31.162448 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-02-04 00:46:31.162457 | orchestrator | Wednesday 04 February 2026 00:45:01 +0000 (0:00:04.046) 0:00:33.824 ****
2026-02-04 00:46:31.162463 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-04 00:46:31.162469 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-04 00:46:31.162474 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-04 00:46:31.162479 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-04 00:46:31.162485 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-04 00:46:31.162490 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-04 00:46:31.162495 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-04 00:46:31.162500 | orchestrator |
2026-02-04 00:46:31.162506 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-02-04 00:46:31.162511 | orchestrator | Wednesday 04 February 2026 00:45:04 +0000 (0:00:03.480) 0:00:37.305 ****
2026-02-04 00:46:31.162516 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:46:31.162521 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:46:31.162604 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:46:31.162610 | orchestrator | changed: [testbed-manager]
2026-02-04 00:46:31.162615 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:46:31.162620 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:46:31.162626 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:46:31.162631 | orchestrator |
2026-02-04 00:46:31.162636 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-02-04 00:46:31.162642 | orchestrator | Wednesday 04 February 2026 00:45:08 +0000 (0:00:03.074) 0:00:40.380 ****
2026-02-04 00:46:31.162648 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:46:31.162662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.162672 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:46:31.162686 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.162700 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.162719 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:46:31.162728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
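The "[WARNING]: Skipped '…' path due to this access issue" lines earlier in this log come from Ansible's `find` module: the role scans optional fluentd overlay directories on the deploy host, and when a listed path does not exist as a directory the module skips it with that warning while the task still finishes `ok` with zero matches. A minimal sketch of that skip behaviour (a hypothetical helper, not the kolla-ansible implementation):

```python
from pathlib import Path


def find_conf_files(paths, pattern="*.conf"):
    """Glob config files under each path; skip non-directories with a warning,
    mirroring the 'Skipped ... is not a directory' messages in the log."""
    matches, warnings = [], []
    for p in map(Path, paths):
        if not p.is_dir():
            warnings.append(
                f"Skipped '{p}' path due to this access issue: "
                f"'{p}' is not a directory"
            )
            continue
        matches.extend(sorted(p.glob(pattern)))
    return matches, warnings


# On a testbed without custom overlays the directory is absent, so the
# warning fires and the result set is simply empty -- harmless by design.
files, warns = find_conf_files(["/definitely-missing/overlays/fluentd/filter"])
```
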
2026-02-04 00:46:31.162737 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:46:31.162752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.162761 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.162776 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:46:31.162786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.162799 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.162809 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:46:31.162815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.162826 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.162831 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:46:31.162837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.162848 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.162854 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.162859 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.162864 | orchestrator |
2026-02-04 00:46:31.162870 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-02-04 00:46:31.162875 | orchestrator | Wednesday 04 February 2026 00:45:10 +0000 (0:00:02.365) 0:00:42.746 ****
2026-02-04 00:46:31.162880 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-04 00:46:31.162886 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-04 00:46:31.162891 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-04 00:46:31.162896 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-04 00:46:31.162901 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-04 00:46:31.162907 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-04 00:46:31.162915 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-04 00:46:31.162921 | orchestrator |
2026-02-04 00:46:31.162926 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-02-04 00:46:31.162931 | orchestrator | Wednesday 04 February 2026 00:45:13 +0000 (0:00:02.638) 0:00:45.384 ****
2026-02-04 00:46:31.162936 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-04 00:46:31.162941 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-04 00:46:31.162946 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-04 00:46:31.162951 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-04 00:46:31.162956 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-04 00:46:31.162961 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-04 00:46:31.162967 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-04 00:46:31.162972 | orchestrator |
2026-02-04 00:46:31.162981 | orchestrator | TASK [common : Check common containers] ****************************************
2026-02-04 00:46:31.162986 | orchestrator | Wednesday 04 February 2026 00:45:15 +0000 (0:00:02.216) 0:00:47.601 ****
2026-02-04 00:46:31.162992 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:46:31.162997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:46:31.163007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:46:31.163015 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.163021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:46:31.163033 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:46:31.163038 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:46:31.163043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.163048 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 00:46:31.163057 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.163064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.163071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.163080 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.163085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.163093 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.163105 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.163116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.163125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.163136 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.163151 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.163159 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:46:31.163164 | orchestrator |
2026-02-04 00:46:31.163169 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-02-04 00:46:31.163174 | orchestrator | Wednesday 04 February 2026 00:45:18 +0000 (0:00:02.979) 0:00:50.581 ****
2026-02-04 00:46:31.163179 | orchestrator | changed: [testbed-manager]
2026-02-04 00:46:31.163184 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:46:31.163189 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:46:31.163194 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:46:31.163199 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:46:31.163203 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:46:31.163208 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:46:31.163213 | orchestrator |
2026-02-04 00:46:31.163218 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-02-04 00:46:31.163223 | orchestrator | Wednesday 04 February 2026 00:45:19 +0000 (0:00:01.461) 0:00:52.043 ****
2026-02-04 00:46:31.163228 | orchestrator | changed: [testbed-manager]
2026-02-04 00:46:31.163233 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:46:31.163238 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:46:31.163243 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:46:31.163248 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:46:31.163252 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:46:31.163257 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:46:31.163262 | orchestrator |
2026-02-04 00:46:31.163267 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-04 00:46:31.163272 | orchestrator | Wednesday 04 February 2026 00:45:20 +0000 (0:00:00.062) 0:00:53.084 ****
2026-02-04 00:46:31.163277 | orchestrator |
2026-02-04 00:46:31.163282 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-04 00:46:31.163287 | orchestrator | Wednesday 04 February 2026 00:45:20 +0000 (0:00:00.059) 0:00:53.146 ****
2026-02-04 00:46:31.163292 | orchestrator |
2026-02-04 00:46:31.163297 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-04 00:46:31.163302 | orchestrator | Wednesday 04 February 2026 00:45:20 +0000 (0:00:00.059) 0:00:53.206 ****
2026-02-04 00:46:31.163306 | orchestrator |
2026-02-04 00:46:31.163354 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-04 00:46:31.163360 | orchestrator | Wednesday 04 February 2026 00:45:21 +0000 (0:00:00.178) 0:00:53.384 ****
2026-02-04 00:46:31.163364 | orchestrator |
2026-02-04 00:46:31.163369 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-04 00:46:31.163374 | orchestrator | Wednesday 04 February 2026 00:45:21 +0000 (0:00:00.059) 0:00:53.444 ****
2026-02-04 00:46:31.163379 | orchestrator |
2026-02-04 00:46:31.163384 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-04 00:46:31.163389 | orchestrator | Wednesday 04 February 2026 00:45:21 +0000 (0:00:00.058) 0:00:53.503 ****
2026-02-04 00:46:31.163394 | orchestrator |
2026-02-04 00:46:31.163398 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-04 00:46:31.163409 | orchestrator | Wednesday 04 February 2026 00:45:21 +0000 (0:00:00.059) 0:00:53.563 ****
2026-02-04 00:46:31.163414 | orchestrator |
2026-02-04 00:46:31.163419 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-04 00:46:31.163424 | orchestrator | Wednesday 04 February 2026 00:45:21 +0000 (0:00:00.079) 0:00:53.642 ****
2026-02-04 00:46:31.163433 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:46:31.163438 | orchestrator | changed: [testbed-manager]
2026-02-04 00:46:31.163443 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:46:31.163448 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:46:31.163452 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:46:31.163457 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:46:31.163462 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:46:31.163467 | orchestrator |
2026-02-04 00:46:31.163472 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-02-04 00:46:31.163477 | orchestrator | Wednesday 04 February 2026 00:45:51 +0000 (0:00:29.728) 0:01:23.371 ****
2026-02-04 00:46:31.163482 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:46:31.163488 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:46:31.163496 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:46:31.163503 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:46:31.163511 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:46:31.163518 | orchestrator | changed: [testbed-manager]
2026-02-04 00:46:31.163545 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:46:31.163553 | orchestrator |
2026-02-04 00:46:31.163560 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-02-04 00:46:31.163567 | orchestrator | Wednesday 04 February 2026 00:46:19 +0000 (0:00:28.571) 0:01:51.942 ****
2026-02-04 00:46:31.163574 | orchestrator | ok: [testbed-manager]
2026-02-04 00:46:31.163582 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:46:31.163589 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:46:31.163596 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:46:31.163607 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:46:31.163615 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:46:31.163623 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:46:31.163631 | orchestrator |
2026-02-04 00:46:31.163640 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-02-04 00:46:31.163648 | orchestrator | Wednesday 04 February 2026 00:46:21 +0000 (0:00:02.244) 0:01:54.187 ****
2026-02-04 00:46:31.163657 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:46:31.163665 | orchestrator | changed: [testbed-manager]
2026-02-04 00:46:31.163673 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:46:31.163680 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:46:31.163687 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:46:31.163695 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:46:31.163703 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:46:31.163710 | orchestrator |
2026-02-04 00:46:31.163718 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:46:31.163726 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 00:46:31.163734 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 00:46:31.163744 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 00:46:31.163753 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 00:46:31.163760 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 00:46:31.163775 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 00:46:31.163784 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 00:46:31.163792 | orchestrator |
2026-02-04 00:46:31.163800 | orchestrator |
2026-02-04 00:46:31.163809 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:46:31.163817 | orchestrator | Wednesday 04 February 2026 00:46:29 +0000 (0:00:08.051) 0:02:02.238 ****
2026-02-04 00:46:31.163825 | orchestrator | ===============================================================================
2026-02-04 00:46:31.163833 | orchestrator | common : Restart fluentd container ------------------------------------- 29.73s
2026-02-04 00:46:31.163842 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 28.57s
2026-02-04 00:46:31.163852 | orchestrator | common : Restart cron container ----------------------------------------- 8.05s
2026-02-04 00:46:31.163857 | orchestrator | common : Copying over config.json files for services -------------------- 6.00s
2026-02-04 00:46:31.163862 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.60s
2026-02-04 00:46:31.163867 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.05s
2026-02-04 00:46:31.163874 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.91s
2026-02-04 00:46:31.163881 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.48s
2026-02-04 00:46:31.163889 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.07s
2026-02-04 00:46:31.163897 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.02s
2026-02-04 00:46:31.163905 | orchestrator | common : Check common containers ---------------------------------------- 2.98s
2026-02-04 00:46:31.163912 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox
------------------------ 2.64s 2026-02-04 00:46:31.163919 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.37s 2026-02-04 00:46:31.163926 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.24s 2026-02-04 00:46:31.163939 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.22s 2026-02-04 00:46:31.163946 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.89s 2026-02-04 00:46:31.163954 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.68s 2026-02-04 00:46:31.163961 | orchestrator | common : Creating log volume -------------------------------------------- 1.46s 2026-02-04 00:46:31.163968 | orchestrator | common : include_tasks -------------------------------------------------- 1.41s 2026-02-04 00:46:31.163975 | orchestrator | common : include_tasks -------------------------------------------------- 1.36s 2026-02-04 00:46:31.163982 | orchestrator | 2026-02-04 00:46:31 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:46:31.164078 | orchestrator | 2026-02-04 00:46:31 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:46:31.164089 | orchestrator | 2026-02-04 00:46:31 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:46:34.194317 | orchestrator | 2026-02-04 00:46:34 | INFO  | Task ed11804f-9888-415a-8dbd-624342269939 is in state STARTED 2026-02-04 00:46:34.195681 | orchestrator | 2026-02-04 00:46:34 | INFO  | Task afe6838c-d7b6-43fa-bb57-e1e8c788d643 is in state STARTED 2026-02-04 00:46:34.196546 | orchestrator | 2026-02-04 00:46:34 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:46:34.197602 | orchestrator | 2026-02-04 00:46:34 | INFO  | Task 7e0b55d6-1c20-45f8-a3a5-411a8fff4411 is in state STARTED 2026-02-04 00:46:34.198674 | orchestrator 
| 2026-02-04 00:46:34 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:46:34.199803 | orchestrator | 2026-02-04 00:46:34 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:46:34.199840 | orchestrator | 2026-02-04 00:46:34 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:46:37.216770 | orchestrator | 2026-02-04 00:46:37 | INFO  | Task ed11804f-9888-415a-8dbd-624342269939 is in state STARTED 2026-02-04 00:46:37.216884 | orchestrator | 2026-02-04 00:46:37 | INFO  | Task afe6838c-d7b6-43fa-bb57-e1e8c788d643 is in state STARTED 2026-02-04 00:46:37.217327 | orchestrator | 2026-02-04 00:46:37 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:46:37.217944 | orchestrator | 2026-02-04 00:46:37 | INFO  | Task 7e0b55d6-1c20-45f8-a3a5-411a8fff4411 is in state STARTED 2026-02-04 00:46:37.218610 | orchestrator | 2026-02-04 00:46:37 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:46:37.219290 | orchestrator | 2026-02-04 00:46:37 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:46:37.219322 | orchestrator | 2026-02-04 00:46:37 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:46:40.242736 | orchestrator | 2026-02-04 00:46:40 | INFO  | Task ed11804f-9888-415a-8dbd-624342269939 is in state STARTED 2026-02-04 00:46:40.243675 | orchestrator | 2026-02-04 00:46:40 | INFO  | Task afe6838c-d7b6-43fa-bb57-e1e8c788d643 is in state STARTED 2026-02-04 00:46:40.244176 | orchestrator | 2026-02-04 00:46:40 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:46:40.244766 | orchestrator | 2026-02-04 00:46:40 | INFO  | Task 7e0b55d6-1c20-45f8-a3a5-411a8fff4411 is in state STARTED 2026-02-04 00:46:40.246497 | orchestrator | 2026-02-04 00:46:40 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:46:40.247103 | orchestrator | 
2026-02-04 00:46:40 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:46:40.247125 | orchestrator | 2026-02-04 00:46:40 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:46:43.273505 | orchestrator | 2026-02-04 00:46:43 | INFO  | Task ed11804f-9888-415a-8dbd-624342269939 is in state STARTED 2026-02-04 00:46:43.276128 | orchestrator | 2026-02-04 00:46:43 | INFO  | Task afe6838c-d7b6-43fa-bb57-e1e8c788d643 is in state STARTED 2026-02-04 00:46:43.279038 | orchestrator | 2026-02-04 00:46:43 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:46:43.279978 | orchestrator | 2026-02-04 00:46:43 | INFO  | Task 7e0b55d6-1c20-45f8-a3a5-411a8fff4411 is in state STARTED 2026-02-04 00:46:43.281425 | orchestrator | 2026-02-04 00:46:43 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:46:43.284502 | orchestrator | 2026-02-04 00:46:43 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:46:43.284662 | orchestrator | 2026-02-04 00:46:43 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:46:46.333051 | orchestrator | 2026-02-04 00:46:46 | INFO  | Task ed11804f-9888-415a-8dbd-624342269939 is in state STARTED 2026-02-04 00:46:46.334154 | orchestrator | 2026-02-04 00:46:46 | INFO  | Task afe6838c-d7b6-43fa-bb57-e1e8c788d643 is in state STARTED 2026-02-04 00:46:46.335079 | orchestrator | 2026-02-04 00:46:46 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:46:46.336272 | orchestrator | 2026-02-04 00:46:46 | INFO  | Task 7e0b55d6-1c20-45f8-a3a5-411a8fff4411 is in state STARTED 2026-02-04 00:46:46.337501 | orchestrator | 2026-02-04 00:46:46 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:46:46.338931 | orchestrator | 2026-02-04 00:46:46 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:46:46.338994 | orchestrator | 
2026-02-04 00:46:46 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:46:49.360214 | orchestrator | 2026-02-04 00:46:49 | INFO  | Task ed11804f-9888-415a-8dbd-624342269939 is in state SUCCESS 2026-02-04 00:46:49.360655 | orchestrator | 2026-02-04 00:46:49 | INFO  | Task afe6838c-d7b6-43fa-bb57-e1e8c788d643 is in state STARTED 2026-02-04 00:46:49.362478 | orchestrator | 2026-02-04 00:46:49 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:46:49.362767 | orchestrator | 2026-02-04 00:46:49 | INFO  | Task 7e0b55d6-1c20-45f8-a3a5-411a8fff4411 is in state STARTED 2026-02-04 00:46:49.368519 | orchestrator | 2026-02-04 00:46:49 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:46:49.368621 | orchestrator | 2026-02-04 00:46:49 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:46:49.368630 | orchestrator | 2026-02-04 00:46:49 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:46:52.396894 | orchestrator | 2026-02-04 00:46:52 | INFO  | Task df8519fa-054d-4667-aa7e-5c42c0b1985b is in state STARTED 2026-02-04 00:46:52.396970 | orchestrator | 2026-02-04 00:46:52 | INFO  | Task afe6838c-d7b6-43fa-bb57-e1e8c788d643 is in state STARTED 2026-02-04 00:46:52.400308 | orchestrator | 2026-02-04 00:46:52 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:46:52.400595 | orchestrator | 2026-02-04 00:46:52 | INFO  | Task 7e0b55d6-1c20-45f8-a3a5-411a8fff4411 is in state STARTED 2026-02-04 00:46:52.401313 | orchestrator | 2026-02-04 00:46:52 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:46:52.401821 | orchestrator | 2026-02-04 00:46:52 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:46:52.401952 | orchestrator | 2026-02-04 00:46:52 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:46:55.428720 | orchestrator | 2026-02-04 00:46:55 | INFO  | 
Task df8519fa-054d-4667-aa7e-5c42c0b1985b is in state STARTED 2026-02-04 00:46:55.428874 | orchestrator | 2026-02-04 00:46:55 | INFO  | Task afe6838c-d7b6-43fa-bb57-e1e8c788d643 is in state STARTED 2026-02-04 00:46:55.429213 | orchestrator | 2026-02-04 00:46:55 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:46:55.431480 | orchestrator | 2026-02-04 00:46:55 | INFO  | Task 7e0b55d6-1c20-45f8-a3a5-411a8fff4411 is in state SUCCESS 2026-02-04 00:46:55.432292 | orchestrator | 2026-02-04 00:46:55 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:46:55.433757 | orchestrator | 2026-02-04 00:46:55.433792 | orchestrator | 2026-02-04 00:46:55.433801 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 00:46:55.433809 | orchestrator | 2026-02-04 00:46:55.433816 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 00:46:55.433824 | orchestrator | Wednesday 04 February 2026 00:46:35 +0000 (0:00:00.247) 0:00:00.247 **** 2026-02-04 00:46:55.433832 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:46:55.433841 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:46:55.433849 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:46:55.433856 | orchestrator | 2026-02-04 00:46:55.433864 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 00:46:55.433872 | orchestrator | Wednesday 04 February 2026 00:46:35 +0000 (0:00:00.317) 0:00:00.564 **** 2026-02-04 00:46:55.433880 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-04 00:46:55.433888 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-04 00:46:55.433915 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-04 00:46:55.433925 | orchestrator | 2026-02-04 00:46:55.433934 | orchestrator | PLAY [Apply role memcached] 
**************************************************** 2026-02-04 00:46:55.433943 | orchestrator | 2026-02-04 00:46:55.433952 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-04 00:46:55.433961 | orchestrator | Wednesday 04 February 2026 00:46:35 +0000 (0:00:00.460) 0:00:01.024 **** 2026-02-04 00:46:55.433969 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:46:55.433979 | orchestrator | 2026-02-04 00:46:55.433988 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-02-04 00:46:55.433997 | orchestrator | Wednesday 04 February 2026 00:46:36 +0000 (0:00:00.541) 0:00:01.566 **** 2026-02-04 00:46:55.434006 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-04 00:46:55.434015 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-04 00:46:55.434078 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-04 00:46:55.434087 | orchestrator | 2026-02-04 00:46:55.434096 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-04 00:46:55.434105 | orchestrator | Wednesday 04 February 2026 00:46:37 +0000 (0:00:00.666) 0:00:02.232 **** 2026-02-04 00:46:55.434114 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-04 00:46:55.434126 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-04 00:46:55.434140 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-04 00:46:55.434154 | orchestrator | 2026-02-04 00:46:55.434169 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-02-04 00:46:55.434183 | orchestrator | Wednesday 04 February 2026 00:46:38 +0000 (0:00:01.930) 0:00:04.162 **** 2026-02-04 00:46:55.434197 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:46:55.434212 | orchestrator | 
changed: [testbed-node-0] 2026-02-04 00:46:55.434226 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:46:55.434240 | orchestrator | 2026-02-04 00:46:55.434264 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-04 00:46:55.434279 | orchestrator | Wednesday 04 February 2026 00:46:40 +0000 (0:00:01.767) 0:00:05.929 **** 2026-02-04 00:46:55.434294 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:46:55.434309 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:46:55.434324 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:46:55.434335 | orchestrator | 2026-02-04 00:46:55.434344 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:46:55.434353 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:46:55.434363 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:46:55.434372 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:46:55.434386 | orchestrator | 2026-02-04 00:46:55.434399 | orchestrator | 2026-02-04 00:46:55.434414 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:46:55.434428 | orchestrator | Wednesday 04 February 2026 00:46:48 +0000 (0:00:07.624) 0:00:13.554 **** 2026-02-04 00:46:55.434442 | orchestrator | =============================================================================== 2026-02-04 00:46:55.434457 | orchestrator | memcached : Restart memcached container --------------------------------- 7.62s 2026-02-04 00:46:55.434472 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.93s 2026-02-04 00:46:55.434486 | orchestrator | memcached : Check memcached container ----------------------------------- 1.77s 2026-02-04 
00:46:55.434500 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.67s 2026-02-04 00:46:55.434509 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.54s 2026-02-04 00:46:55.434558 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2026-02-04 00:46:55.434568 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-02-04 00:46:55.434577 | orchestrator | 2026-02-04 00:46:55.434586 | orchestrator | 2026-02-04 00:46:55.434595 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 00:46:55.434604 | orchestrator | 2026-02-04 00:46:55.434612 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 00:46:55.434621 | orchestrator | Wednesday 04 February 2026 00:46:34 +0000 (0:00:00.243) 0:00:00.243 **** 2026-02-04 00:46:55.434630 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:46:55.434639 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:46:55.434648 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:46:55.434657 | orchestrator | 2026-02-04 00:46:55.434666 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 00:46:55.434689 | orchestrator | Wednesday 04 February 2026 00:46:34 +0000 (0:00:00.261) 0:00:00.505 **** 2026-02-04 00:46:55.434698 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-04 00:46:55.434707 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-04 00:46:55.434716 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-04 00:46:55.434725 | orchestrator | 2026-02-04 00:46:55.434734 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-04 00:46:55.434743 | orchestrator | 2026-02-04 00:46:55.434752 | orchestrator 
| TASK [redis : include_tasks] *************************************************** 2026-02-04 00:46:55.434761 | orchestrator | Wednesday 04 February 2026 00:46:35 +0000 (0:00:00.386) 0:00:00.892 **** 2026-02-04 00:46:55.434769 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:46:55.434778 | orchestrator | 2026-02-04 00:46:55.434787 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-04 00:46:55.434796 | orchestrator | Wednesday 04 February 2026 00:46:35 +0000 (0:00:00.566) 0:00:01.459 **** 2026-02-04 00:46:55.434808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.434823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.434889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.434902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.434919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.434938 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.434948 | orchestrator | 2026-02-04 00:46:55.434957 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-04 00:46:55.434966 | orchestrator | Wednesday 04 February 2026 00:46:36 +0000 (0:00:01.139) 0:00:02.598 **** 2026-02-04 00:46:55.434975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.434985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.434995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.435009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.435019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.435048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.435058 | orchestrator | 2026-02-04 00:46:55.435067 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-04 00:46:55.435076 | orchestrator | Wednesday 04 February 2026 00:46:39 +0000 (0:00:03.036) 0:00:05.635 **** 2026-02-04 00:46:55.435085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.435095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.435108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.435123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.435132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.435148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.435157 | orchestrator | 2026-02-04 00:46:55.435169 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-02-04 00:46:55.435184 | orchestrator | Wednesday 04 February 2026 00:46:42 +0000 (0:00:02.485) 0:00:08.121 **** 2026-02-04 00:46:55.435194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.435204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.435217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.435232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.435241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.435256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 00:46:55.435265 | orchestrator | 2026-02-04 00:46:55.435274 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-04 00:46:55.435283 | orchestrator | Wednesday 04 February 2026 00:46:43 +0000 (0:00:01.600) 0:00:09.722 **** 2026-02-04 00:46:55.435292 | orchestrator | 2026-02-04 00:46:55.435301 | 
orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-04 00:46:55.435310 | orchestrator | Wednesday 04 February 2026 00:46:44 +0000 (0:00:00.141) 0:00:09.863 **** 2026-02-04 00:46:55.435319 | orchestrator | 2026-02-04 00:46:55.435328 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-04 00:46:55.435337 | orchestrator | Wednesday 04 February 2026 00:46:44 +0000 (0:00:00.063) 0:00:09.927 **** 2026-02-04 00:46:55.435345 | orchestrator | 2026-02-04 00:46:55.435354 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-04 00:46:55.435368 | orchestrator | Wednesday 04 February 2026 00:46:44 +0000 (0:00:00.151) 0:00:10.078 **** 2026-02-04 00:46:55.435383 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:46:55.435398 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:46:55.435413 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:46:55.435427 | orchestrator | 2026-02-04 00:46:55.435441 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-04 00:46:55.435456 | orchestrator | Wednesday 04 February 2026 00:46:48 +0000 (0:00:04.571) 0:00:14.650 **** 2026-02-04 00:46:55.435471 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:46:55.435486 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:46:55.435502 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:46:55.435511 | orchestrator | 2026-02-04 00:46:55.435570 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:46:55.435582 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:46:55.435595 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:46:55.435611 | orchestrator | testbed-node-2 : ok=9  changed=6  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:46:55.435626 | orchestrator | 2026-02-04 00:46:55.435641 | orchestrator | 2026-02-04 00:46:55.435657 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:46:55.435673 | orchestrator | Wednesday 04 February 2026 00:46:52 +0000 (0:00:03.598) 0:00:18.248 **** 2026-02-04 00:46:55.435688 | orchestrator | =============================================================================== 2026-02-04 00:46:55.435703 | orchestrator | redis : Restart redis container ----------------------------------------- 4.57s 2026-02-04 00:46:55.435717 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.60s 2026-02-04 00:46:55.435727 | orchestrator | redis : Copying over default config.json files -------------------------- 3.04s 2026-02-04 00:46:55.435735 | orchestrator | redis : Copying over redis config files --------------------------------- 2.49s 2026-02-04 00:46:55.435744 | orchestrator | redis : Check redis containers ------------------------------------------ 1.60s 2026-02-04 00:46:55.435753 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.14s 2026-02-04 00:46:55.435762 | orchestrator | redis : include_tasks --------------------------------------------------- 0.57s 2026-02-04 00:46:55.435770 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s 2026-02-04 00:46:55.435779 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.36s 2026-02-04 00:46:55.435788 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s 2026-02-04 00:46:55.435895 | orchestrator | 2026-02-04 00:46:55 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:46:55.435907 | orchestrator | 2026-02-04 00:46:55 | INFO  | Wait 1 second(s) until the next check 
2026-02-04 00:47:35.360030 | orchestrator | 2026-02-04 00:47:35 | INFO  | Task df8519fa-054d-4667-aa7e-5c42c0b1985b is in state STARTED 2026-02-04 00:47:35.360681 | orchestrator | 2026-02-04 00:47:35 | INFO  | Task afe6838c-d7b6-43fa-bb57-e1e8c788d643 is in state SUCCESS 2026-02-04 00:47:35.361994 | orchestrator | 2026-02-04 00:47:35.362093 | orchestrator | 2026-02-04 00:47:35.362114 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2026-02-04 00:47:35.362164 | orchestrator | 2026-02-04 00:47:35.362182 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 00:47:35.362200 | orchestrator | Wednesday 04 February 2026 00:46:34 +0000 (0:00:00.313) 0:00:00.313 **** 2026-02-04 00:47:35.362219 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:47:35.362238 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:47:35.362256 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:47:35.362273 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:47:35.362291 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:47:35.362307 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:47:35.362317 | orchestrator | 2026-02-04 00:47:35.362328 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 00:47:35.362343 | orchestrator | Wednesday 04 February 2026 00:46:35 +0000 (0:00:00.716) 0:00:01.030 **** 2026-02-04 00:47:35.362359 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-04 00:47:35.362374 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-04 00:47:35.362390 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-04 00:47:35.362406 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-04 00:47:35.362421 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-04 00:47:35.362438 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-04 00:47:35.362454 | orchestrator | 2026-02-04 00:47:35.362472 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-02-04 00:47:35.362489 | orchestrator | 2026-02-04 00:47:35.362506 | orchestrator | TASK 
[openvswitch : include_tasks] ********************************************* 2026-02-04 00:47:35.362552 | orchestrator | Wednesday 04 February 2026 00:46:36 +0000 (0:00:00.676) 0:00:01.706 **** 2026-02-04 00:47:35.362565 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:47:35.362576 | orchestrator | 2026-02-04 00:47:35.362589 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-04 00:47:35.362600 | orchestrator | Wednesday 04 February 2026 00:46:37 +0000 (0:00:01.118) 0:00:02.824 **** 2026-02-04 00:47:35.362612 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-04 00:47:35.362624 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-04 00:47:35.362636 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-04 00:47:35.362647 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-04 00:47:35.362660 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-04 00:47:35.362671 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-04 00:47:35.362683 | orchestrator | 2026-02-04 00:47:35.362718 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-04 00:47:35.362729 | orchestrator | Wednesday 04 February 2026 00:46:38 +0000 (0:00:01.295) 0:00:04.120 **** 2026-02-04 00:47:35.362741 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-04 00:47:35.362752 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-04 00:47:35.362763 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-04 00:47:35.362775 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-04 00:47:35.362786 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-04 
00:47:35.362797 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-04 00:47:35.362808 | orchestrator | 2026-02-04 00:47:35.362820 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-04 00:47:35.362831 | orchestrator | Wednesday 04 February 2026 00:46:40 +0000 (0:00:01.865) 0:00:05.985 **** 2026-02-04 00:47:35.362842 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-04 00:47:35.362866 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:47:35.362879 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-04 00:47:35.362890 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:47:35.362901 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-04 00:47:35.362913 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:47:35.362925 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-04 00:47:35.362936 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:47:35.362947 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-04 00:47:35.362958 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:47:35.362970 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-04 00:47:35.362980 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:47:35.362990 | orchestrator | 2026-02-04 00:47:35.362999 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-04 00:47:35.363009 | orchestrator | Wednesday 04 February 2026 00:46:41 +0000 (0:00:01.246) 0:00:07.232 **** 2026-02-04 00:47:35.363019 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:47:35.363029 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:47:35.363038 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:47:35.363048 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:47:35.363058 | orchestrator | skipping: [testbed-node-4] 
2026-02-04 00:47:35.363067 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:47:35.363077 | orchestrator | 2026-02-04 00:47:35.363086 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-04 00:47:35.363096 | orchestrator | Wednesday 04 February 2026 00:46:42 +0000 (0:00:00.625) 0:00:07.858 **** 2026-02-04 00:47:35.363128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363189 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363206 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363249 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363275 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363291 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363302 | orchestrator | 2026-02-04 00:47:35.363312 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-04 00:47:35.363323 | orchestrator | Wednesday 04 February 2026 00:46:43 +0000 (0:00:01.565) 0:00:09.423 **** 2026-02-04 00:47:35.363333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363361 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': 
'30'}}}) 2026-02-04 00:47:35.363376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363404 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363420 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363460 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363487 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363497 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363570 | orchestrator | 2026-02-04 00:47:35.363584 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-04 00:47:35.363594 | orchestrator | Wednesday 04 February 2026 00:46:47 +0000 (0:00:03.939) 0:00:13.363 **** 2026-02-04 00:47:35.363604 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:47:35.363614 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:47:35.363623 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:47:35.363633 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:47:35.363643 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:47:35.363653 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:47:35.363662 | orchestrator | 2026-02-04 00:47:35.363672 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-02-04 00:47:35.363682 | orchestrator | Wednesday 04 February 2026 00:46:49 +0000 (0:00:01.114) 0:00:14.478 **** 2026-02-04 00:47:35.363711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363738 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363794 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363808 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363819 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363861 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363873 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363883 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 00:47:35.363893 | orchestrator | 2026-02-04 00:47:35.363903 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-04 00:47:35.363913 | orchestrator | Wednesday 04 February 2026 00:46:52 +0000 (0:00:03.285) 0:00:17.763 **** 2026-02-04 00:47:35.363923 | orchestrator | 2026-02-04 00:47:35.363933 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-04 00:47:35.363943 | orchestrator | Wednesday 04 February 2026 00:46:52 +0000 (0:00:00.394) 0:00:18.158 **** 2026-02-04 00:47:35.363953 | orchestrator | 2026-02-04 00:47:35.363963 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-04 00:47:35.363972 | orchestrator | Wednesday 04 February 2026 00:46:52 +0000 (0:00:00.131) 0:00:18.290 **** 2026-02-04 00:47:35.363982 | orchestrator | 2026-02-04 00:47:35.363992 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-04 00:47:35.364002 | orchestrator | 
Wednesday 04 February 2026 00:46:52 +0000 (0:00:00.134) 0:00:18.424 **** 2026-02-04 00:47:35.364011 | orchestrator | 2026-02-04 00:47:35.364021 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-04 00:47:35.364031 | orchestrator | Wednesday 04 February 2026 00:46:53 +0000 (0:00:00.194) 0:00:18.619 **** 2026-02-04 00:47:35.364041 | orchestrator | 2026-02-04 00:47:35.364051 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-04 00:47:35.364061 | orchestrator | Wednesday 04 February 2026 00:46:53 +0000 (0:00:00.251) 0:00:18.871 **** 2026-02-04 00:47:35.364070 | orchestrator | 2026-02-04 00:47:35.364085 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-04 00:47:35.364095 | orchestrator | Wednesday 04 February 2026 00:46:53 +0000 (0:00:00.307) 0:00:19.179 **** 2026-02-04 00:47:35.364105 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:47:35.364115 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:47:35.364132 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:47:35.364142 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:47:35.364152 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:47:35.364161 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:47:35.364171 | orchestrator | 2026-02-04 00:47:35.364181 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-04 00:47:35.364191 | orchestrator | Wednesday 04 February 2026 00:47:03 +0000 (0:00:10.132) 0:00:29.311 **** 2026-02-04 00:47:35.364201 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:47:35.364211 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:47:35.364221 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:47:35.364231 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:47:35.364240 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:47:35.364250 | 
orchestrator | ok: [testbed-node-5] 2026-02-04 00:47:35.364260 | orchestrator | 2026-02-04 00:47:35.364270 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-04 00:47:35.364280 | orchestrator | Wednesday 04 February 2026 00:47:05 +0000 (0:00:01.227) 0:00:30.539 **** 2026-02-04 00:47:35.364290 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:47:35.364299 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:47:35.364309 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:47:35.364319 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:47:35.364329 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:47:35.364339 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:47:35.364348 | orchestrator | 2026-02-04 00:47:35.364358 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-04 00:47:35.364368 | orchestrator | Wednesday 04 February 2026 00:47:10 +0000 (0:00:05.666) 0:00:36.205 **** 2026-02-04 00:47:35.364383 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-04 00:47:35.364394 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-04 00:47:35.364405 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-04 00:47:35.364415 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-04 00:47:35.364425 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-04 00:47:35.364435 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-04 00:47:35.364458 | orchestrator | changed: 
[testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-04 00:47:35.364468 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-04 00:47:35.364478 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-04 00:47:35.364488 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-04 00:47:35.364498 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-04 00:47:35.364508 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-04 00:47:35.364584 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-04 00:47:35.364601 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-04 00:47:35.364619 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-04 00:47:35.364635 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-04 00:47:35.364660 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-04 00:47:35.364675 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-04 00:47:35.364692 | orchestrator | 2026-02-04 00:47:35.364709 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-02-04 00:47:35.364726 | orchestrator | 
Wednesday 04 February 2026 00:47:18 +0000 (0:00:08.101) 0:00:44.306 **** 2026-02-04 00:47:35.364736 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-04 00:47:35.364746 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:47:35.364756 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-04 00:47:35.364766 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:47:35.364776 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-04 00:47:35.364785 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:47:35.364795 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-02-04 00:47:35.364805 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-02-04 00:47:35.364814 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-02-04 00:47:35.364824 | orchestrator | 2026-02-04 00:47:35.364834 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-04 00:47:35.364849 | orchestrator | Wednesday 04 February 2026 00:47:21 +0000 (0:00:02.399) 0:00:46.706 **** 2026-02-04 00:47:35.364859 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-04 00:47:35.364869 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-04 00:47:35.364878 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:47:35.364888 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:47:35.364898 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-04 00:47:35.364907 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:47:35.364917 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-04 00:47:35.364927 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-02-04 00:47:35.364936 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-04 00:47:35.364946 | orchestrator | 2026-02-04 00:47:35.364955 | orchestrator 
| RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-04 00:47:35.364965 | orchestrator | Wednesday 04 February 2026 00:47:25 +0000 (0:00:03.824) 0:00:50.530 **** 2026-02-04 00:47:35.364974 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:47:35.364984 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:47:35.364994 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:47:35.365003 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:47:35.365013 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:47:35.365022 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:47:35.365032 | orchestrator | 2026-02-04 00:47:35.365042 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:47:35.365052 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-04 00:47:35.365070 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-04 00:47:35.365081 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-04 00:47:35.365091 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 00:47:35.365101 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 00:47:35.365120 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 00:47:35.365128 | orchestrator | 2026-02-04 00:47:35.365136 | orchestrator | 2026-02-04 00:47:35.365144 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:47:35.365152 | orchestrator | Wednesday 04 February 2026 00:47:33 +0000 (0:00:07.996) 0:00:58.529 **** 2026-02-04 00:47:35.365160 | orchestrator | 
=============================================================================== 2026-02-04 00:47:35.365168 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 13.67s 2026-02-04 00:47:35.365176 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.13s 2026-02-04 00:47:35.365184 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.10s 2026-02-04 00:47:35.365202 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.94s 2026-02-04 00:47:35.365211 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.82s 2026-02-04 00:47:35.365218 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.29s 2026-02-04 00:47:35.365226 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.40s 2026-02-04 00:47:35.365234 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.87s 2026-02-04 00:47:35.365242 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.57s 2026-02-04 00:47:35.365250 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.42s 2026-02-04 00:47:35.365257 | orchestrator | module-load : Load modules ---------------------------------------------- 1.30s 2026-02-04 00:47:35.365265 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.25s 2026-02-04 00:47:35.365273 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.23s 2026-02-04 00:47:35.365281 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.12s 2026-02-04 00:47:35.365295 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.11s 2026-02-04 00:47:35.365303 | orchestrator | Group hosts 
based on Kolla action --------------------------------------- 0.72s 2026-02-04 00:47:35.365311 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s 2026-02-04 00:47:35.365319 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.63s 2026-02-04 00:47:35.365471 | orchestrator | 2026-02-04 00:47:35 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:47:35.365484 | orchestrator | 2026-02-04 00:47:35 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state STARTED 2026-02-04 00:47:35.365492 | orchestrator | 2026-02-04 00:47:35 | INFO  | Task 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 is in state STARTED 2026-02-04 00:47:35.365505 | orchestrator | 2026-02-04 00:47:35 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:47:35.365536 | orchestrator | 2026-02-04 00:47:35 | INFO  | Wait 1 second(s) until the next check
[... repeated status checks from 00:47:38 to 00:48:51 elided: tasks df8519fa-054d-4667-aa7e-5c42c0b1985b, 836470c2-5f07-49fb-ad60-5e325a088f5a, 7581172f-6e48-4c82-8cea-78cf30ed8e64, 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 and 227495c6-aec9-44a3-8e31-96a65f9ed65b remained in state STARTED, rechecked roughly every 3 seconds ...]
2026-02-04 00:48:54.522728 | orchestrator | 2026-02-04 00:48:54 | INFO  | Task df8519fa-054d-4667-aa7e-5c42c0b1985b is in state STARTED 2026-02-04 00:48:54.522869 | orchestrator | 2026-02-04 00:48:54 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:48:54.523770 | orchestrator | 2026-02-04 00:48:54 | INFO  | Task 8127556d-45ed-46b3-aff3-f9b5bacee0f9 is in state STARTED 2026-02-04 00:48:54.524789 | orchestrator | 2026-02-04 00:48:54 | INFO  | Task 7581172f-6e48-4c82-8cea-78cf30ed8e64 is in state SUCCESS 2026-02-04 00:48:54.526898 | orchestrator | 2026-02-04 00:48:54.526935 | orchestrator | 2026-02-04 00:48:54.526941 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-04 00:48:54.526947 | orchestrator | 2026-02-04 00:48:54.526953 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-04 00:48:54.526958 | orchestrator | Wednesday 04 February 2026 00:44:27 +0000 (0:00:00.138) 
0:00:00.138 **** 2026-02-04 00:48:54.526963 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:48:54.526970 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:48:54.526975 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:48:54.526979 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:48:54.526984 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:48:54.526989 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:48:54.526993 | orchestrator | 2026-02-04 00:48:54.526998 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-04 00:48:54.527003 | orchestrator | Wednesday 04 February 2026 00:44:28 +0000 (0:00:00.574) 0:00:00.713 **** 2026-02-04 00:48:54.527008 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:48:54.527015 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:48:54.527023 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:48:54.527035 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:48:54.527047 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:48:54.527055 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:48:54.527062 | orchestrator | 2026-02-04 00:48:54.527069 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-04 00:48:54.527076 | orchestrator | Wednesday 04 February 2026 00:44:29 +0000 (0:00:00.684) 0:00:01.398 **** 2026-02-04 00:48:54.527083 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:48:54.527154 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:48:54.527162 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:48:54.527170 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:48:54.527177 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:48:54.527185 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:48:54.527193 | orchestrator | 2026-02-04 00:48:54.527200 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 
2026-02-04 00:48:54.527209 | orchestrator | Wednesday 04 February 2026 00:44:29 +0000 (0:00:00.670) 0:00:02.069 **** 2026-02-04 00:48:54.527215 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:48:54.527223 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:48:54.527231 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:48:54.527238 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:48:54.527247 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:48:54.527252 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:48:54.527257 | orchestrator | 2026-02-04 00:48:54.527262 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-04 00:48:54.527267 | orchestrator | Wednesday 04 February 2026 00:44:31 +0000 (0:00:01.927) 0:00:03.996 **** 2026-02-04 00:48:54.527272 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:48:54.527302 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:48:54.527307 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:48:54.527311 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:48:54.527322 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:48:54.527327 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:48:54.527331 | orchestrator | 2026-02-04 00:48:54.527336 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-02-04 00:48:54.527341 | orchestrator | Wednesday 04 February 2026 00:44:33 +0000 (0:00:01.319) 0:00:05.315 **** 2026-02-04 00:48:54.527345 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:48:54.527350 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:48:54.527355 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:48:54.527359 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:48:54.527364 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:48:54.527368 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:48:54.527373 | orchestrator | 
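The `k3s_prereq` tasks above enable IPv4/IPv6 forwarding and IPv6 router advertisements on every node before the k3s install. As a rough illustration of what those tasks persist, here is a hypothetical sketch that renders the corresponding sysctl settings as a `/etc/sysctl.d`-style fragment; the key names are standard Linux sysctls matching the task titles, but the exact values and mechanism the role uses are an assumption (the real role applies them via Ansible, not this code).

```python
# Hypothetical sketch: render the sysctl settings implied by the
# k3s_prereq task titles as a sysctl.d-style config fragment.
# The real role applies these with Ansible's sysctl module; this
# only illustrates the keys involved.

K3S_SYSCTL = {
    "net.ipv4.ip_forward": 1,           # "Enable IPv4 forwarding"
    "net.ipv6.conf.all.forwarding": 1,  # "Enable IPv6 forwarding"
    "net.ipv6.conf.all.accept_ra": 2,   # "Enable IPv6 router advertisements"
}

def render_sysctl_fragment(settings: dict) -> str:
    """Return a config fragment in 'key = value' sysctl.d format."""
    return "\n".join(f"{k} = {v}" for k, v in sorted(settings.items())) + "\n"

if __name__ == "__main__":
    print(render_sysctl_fragment(K3S_SYSCTL), end="")
```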
2026-02-04 00:48:54.527387 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-02-04 00:48:54.527392 | orchestrator | Wednesday 04 February 2026 00:44:34 +0000 (0:00:01.443) 0:00:06.759 **** 2026-02-04 00:48:54.527397 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:48:54.527401 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:48:54.527406 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:48:54.527410 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:48:54.527415 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:48:54.527419 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:48:54.527424 | orchestrator | 2026-02-04 00:48:54.527429 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-04 00:48:54.527433 | orchestrator | Wednesday 04 February 2026 00:44:35 +0000 (0:00:00.777) 0:00:07.536 **** 2026-02-04 00:48:54.527438 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:48:54.527443 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:48:54.527447 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:48:54.527452 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:48:54.527456 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:48:54.527461 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:48:54.527465 | orchestrator | 2026-02-04 00:48:54.527470 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-04 00:48:54.527474 | orchestrator | Wednesday 04 February 2026 00:44:36 +0000 (0:00:00.950) 0:00:08.487 **** 2026-02-04 00:48:54.527479 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 00:48:54.527486 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 00:48:54.527509 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:48:54.527516 
| orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 00:48:54.527523 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 00:48:54.527530 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:48:54.527537 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 00:48:54.527543 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 00:48:54.527549 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:48:54.527556 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 00:48:54.527577 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 00:48:54.527584 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:48:54.527592 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 00:48:54.527599 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 00:48:54.527608 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:48:54.527616 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 00:48:54.527632 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 00:48:54.527640 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:48:54.527649 | orchestrator | 2026-02-04 00:48:54.527670 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-02-04 00:48:54.527678 | orchestrator | Wednesday 04 February 2026 00:44:36 +0000 (0:00:00.454) 0:00:08.941 **** 2026-02-04 00:48:54.527687 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:48:54.527695 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:48:54.527703 | orchestrator | skipping: 
[testbed-node-5] 2026-02-04 00:48:54.527712 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:48:54.527719 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:48:54.527734 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:48:54.527742 | orchestrator | 2026-02-04 00:48:54.527752 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-02-04 00:48:54.527761 | orchestrator | Wednesday 04 February 2026 00:44:38 +0000 (0:00:01.734) 0:00:10.675 **** 2026-02-04 00:48:54.527769 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:48:54.527778 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:48:54.527786 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:48:54.527794 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:48:54.527803 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:48:54.527810 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:48:54.527819 | orchestrator | 2026-02-04 00:48:54.527827 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-02-04 00:48:54.527835 | orchestrator | Wednesday 04 February 2026 00:44:39 +0000 (0:00:00.945) 0:00:11.621 **** 2026-02-04 00:48:54.527844 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:48:54.527852 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:48:54.527860 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:48:54.527870 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:48:54.527877 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:48:54.527885 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:48:54.527894 | orchestrator | 2026-02-04 00:48:54.527901 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-02-04 00:48:54.527910 | orchestrator | Wednesday 04 February 2026 00:44:44 +0000 (0:00:05.124) 0:00:16.745 **** 2026-02-04 00:48:54.527918 | orchestrator | skipping: [testbed-node-3] 
2026-02-04 00:48:54.527925 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:48:54.527934 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:48:54.527941 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:48:54.527949 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:48:54.527960 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:48:54.527968 | orchestrator | 2026-02-04 00:48:54.527977 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-02-04 00:48:54.527985 | orchestrator | Wednesday 04 February 2026 00:44:45 +0000 (0:00:01.190) 0:00:17.936 **** 2026-02-04 00:48:54.527993 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:48:54.528001 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:48:54.528009 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:48:54.528018 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:48:54.528026 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:48:54.528033 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:48:54.528040 | orchestrator | 2026-02-04 00:48:54.528054 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-02-04 00:48:54.528065 | orchestrator | Wednesday 04 February 2026 00:44:47 +0000 (0:00:01.839) 0:00:19.775 **** 2026-02-04 00:48:54.528073 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:48:54.528085 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:48:54.528093 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:48:54.528102 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:48:54.528110 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:48:54.528121 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:48:54.528133 | orchestrator | 2026-02-04 00:48:54.528139 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 
2026-02-04 00:48:54.528147 | orchestrator | Wednesday 04 February 2026 00:44:49 +0000 (0:00:01.506) 0:00:21.282 ****
2026-02-04 00:48:54.528154 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-02-04 00:48:54.528162 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-02-04 00:48:54.528170 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:48:54.528177 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-02-04 00:48:54.528185 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-02-04 00:48:54.528192 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:48:54.528201 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-02-04 00:48:54.528208 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-02-04 00:48:54.528217 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:48:54.528225 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-02-04 00:48:54.528234 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-02-04 00:48:54.528242 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:48:54.528251 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-02-04 00:48:54.528259 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-02-04 00:48:54.528268 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:48:54.528276 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-02-04 00:48:54.528285 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-02-04 00:48:54.528293 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:48:54.528301 | orchestrator |
2026-02-04 00:48:54.528310 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-04 00:48:54.528326 | orchestrator | Wednesday 04 February 2026 00:44:50 +0000 (0:00:01.815) 0:00:23.097 ****
2026-02-04 00:48:54.528335 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:48:54.528344 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:48:54.528352 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:48:54.528360 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:48:54.528369 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:48:54.528377 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:48:54.528386 | orchestrator |
2026-02-04 00:48:54.528395 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-04 00:48:54.528404 | orchestrator | Wednesday 04 February 2026 00:44:51 +0000 (0:00:00.908) 0:00:24.005 ****
2026-02-04 00:48:54.528412 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:48:54.528421 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:48:54.528429 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:48:54.528437 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:48:54.528446 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:48:54.528454 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:48:54.528463 | orchestrator |
2026-02-04 00:48:54.528471 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-04 00:48:54.528480 | orchestrator |
2026-02-04 00:48:54.528489 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-04 00:48:54.528582 | orchestrator | Wednesday 04 February 2026 00:44:53 +0000 (0:00:01.297) 0:00:25.303 ****
2026-02-04 00:48:54.528591 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:48:54.528600 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:48:54.528609 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:48:54.528617 | orchestrator |
2026-02-04 00:48:54.528626 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-04 00:48:54.528634 | orchestrator | Wednesday 04 February 2026 00:44:54 +0000 (0:00:01.432) 0:00:26.736 ****
2026-02-04 00:48:54.528643 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:48:54.528652 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:48:54.528660 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:48:54.528676 | orchestrator |
2026-02-04 00:48:54.528684 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-04 00:48:54.528693 | orchestrator | Wednesday 04 February 2026 00:44:55 +0000 (0:00:01.181) 0:00:27.917 ****
2026-02-04 00:48:54.528701 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:48:54.528710 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:48:54.528718 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:48:54.528726 | orchestrator |
2026-02-04 00:48:54.528744 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-04 00:48:54.528752 | orchestrator | Wednesday 04 February 2026 00:44:56 +0000 (0:00:01.005) 0:00:28.923 ****
2026-02-04 00:48:54.528761 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:48:54.528769 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:48:54.528778 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:48:54.528786 | orchestrator |
2026-02-04 00:48:54.528795 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-04 00:48:54.528803 | orchestrator | Wednesday 04 February 2026 00:44:57 +0000 (0:00:00.922) 0:00:29.846 ****
2026-02-04 00:48:54.528812 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:48:54.528820 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:48:54.528829 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:48:54.528837 | orchestrator |
2026-02-04 00:48:54.528846 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-04 00:48:54.528855 | orchestrator | Wednesday 04 February 2026 00:44:58 +0000 (0:00:00.371) 0:00:30.218 ****
2026-02-04 00:48:54.528864 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:48:54.528872 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:48:54.528881 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:48:54.528889 | orchestrator |
2026-02-04 00:48:54.528898 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-04 00:48:54.528906 | orchestrator | Wednesday 04 February 2026 00:44:59 +0000 (0:00:01.095) 0:00:31.314 ****
2026-02-04 00:48:54.528915 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:48:54.528923 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:48:54.528932 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:48:54.528940 | orchestrator |
2026-02-04 00:48:54.528949 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-04 00:48:54.528958 | orchestrator | Wednesday 04 February 2026 00:45:00 +0000 (0:00:01.547) 0:00:32.861 ****
2026-02-04 00:48:54.528967 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:48:54.528975 | orchestrator |
2026-02-04 00:48:54.528984 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-04 00:48:54.528992 | orchestrator | Wednesday 04 February 2026 00:45:01 +0000 (0:00:00.438) 0:00:33.300 ****
2026-02-04 00:48:54.529001 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:48:54.529009 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:48:54.529018 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:48:54.529027 | orchestrator |
2026-02-04 00:48:54.529579 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-02-04 00:48:54.529603 | orchestrator | Wednesday 04 February 2026 00:45:03 +0000 (0:00:02.120) 0:00:35.421 ****
2026-02-04 00:48:54.529612 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:48:54.529621 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:48:54.529630 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:48:54.529639 | orchestrator |
2026-02-04 00:48:54.529648 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-02-04 00:48:54.529657 | orchestrator | Wednesday 04 February 2026 00:45:03 +0000 (0:00:00.698) 0:00:36.119 ****
2026-02-04 00:48:54.529665 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:48:54.529674 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:48:54.529683 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:48:54.529692 | orchestrator |
2026-02-04 00:48:54.529700 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-02-04 00:48:54.529719 | orchestrator | Wednesday 04 February 2026 00:45:05 +0000 (0:00:01.149) 0:00:37.269 ****
2026-02-04 00:48:54.529728 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:48:54.529737 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:48:54.529746 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:48:54.529754 | orchestrator |
2026-02-04 00:48:54.529763 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-02-04 00:48:54.529782 | orchestrator | Wednesday 04 February 2026 00:45:06 +0000 (0:00:01.351) 0:00:38.620 ****
2026-02-04 00:48:54.529791 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:48:54.529800 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:48:54.529808 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:48:54.529817 | orchestrator |
2026-02-04 00:48:54.529826 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-02-04 00:48:54.529834 | orchestrator | Wednesday 04 February 2026 00:45:07 +0000 (0:00:00.752) 0:00:39.372 ****
2026-02-04 00:48:54.529843 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:48:54.529852 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:48:54.529860 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:48:54.529869 | orchestrator |
2026-02-04 00:48:54.529878 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-02-04 00:48:54.529887 | orchestrator | Wednesday 04 February 2026 00:45:07 +0000 (0:00:00.410) 0:00:39.783 ****
2026-02-04 00:48:54.529895 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:48:54.529904 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:48:54.529913 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:48:54.529922 | orchestrator |
2026-02-04 00:48:54.529930 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-02-04 00:48:54.529939 | orchestrator | Wednesday 04 February 2026 00:45:09 +0000 (0:00:01.797) 0:00:41.581 ****
2026-02-04 00:48:54.529948 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:48:54.529957 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:48:54.529965 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:48:54.529974 | orchestrator |
2026-02-04 00:48:54.529982 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-02-04 00:48:54.529991 | orchestrator | Wednesday 04 February 2026 00:45:12 +0000 (0:00:02.818) 0:00:44.399 ****
2026-02-04 00:48:54.530000 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:48:54.530008 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:48:54.530083 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:48:54.530093 | orchestrator |
2026-02-04 00:48:54.530102 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-02-04 00:48:54.530111 | orchestrator | Wednesday 04 February 2026 00:45:12 +0000 (0:00:00.660) 0:00:45.060 ****
2026-02-04 00:48:54.530120 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-04 00:48:54.530131 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-04 00:48:54.530145 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-04 00:48:54.530154 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-04 00:48:54.530163 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-04 00:48:54.530172 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-04 00:48:54.530181 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-04 00:48:54.530190 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-04 00:48:54.530209 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-04 00:48:54.530218 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-04 00:48:54.530227 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-04 00:48:54.530236 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-04 00:48:54.530246 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-02-04 00:48:54.530255 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-02-04 00:48:54.530263 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-02-04 00:48:54.530271 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:48:54.530279 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:48:54.530288 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:48:54.530297 | orchestrator |
2026-02-04 00:48:54.530306 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-02-04 00:48:54.530315 | orchestrator | Wednesday 04 February 2026 00:46:06 +0000 (0:00:54.000) 0:01:39.060 ****
2026-02-04 00:48:54.530325 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:48:54.530334 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:48:54.530341 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:48:54.530348 | orchestrator |
2026-02-04 00:48:54.530356 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-02-04 00:48:54.530369 | orchestrator | Wednesday 04 February 2026 00:46:07 +0000 (0:00:00.464) 0:01:39.525 ****
2026-02-04 00:48:54.530376 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:48:54.530384 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:48:54.530392 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:48:54.530399 | orchestrator |
2026-02-04 00:48:54.530407 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-02-04 00:48:54.530414 | orchestrator | Wednesday 04 February 2026 00:46:08 +0000 (0:00:01.027) 0:01:40.553 ****
2026-02-04 00:48:54.530421 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:48:54.530428 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:48:54.530436 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:48:54.530443 | orchestrator |
2026-02-04 00:48:54.530451 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-02-04 00:48:54.530458 | orchestrator | Wednesday 04 February 2026 00:46:09 +0000 (0:00:01.048) 0:01:41.601 ****
2026-02-04 00:48:54.530466 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:48:54.530474 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:48:54.530483 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:48:54.530491 | orchestrator |
2026-02-04 00:48:54.530523 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-02-04 00:48:54.530531 | orchestrator | Wednesday 04 February 2026 00:46:36 +0000 (0:00:27.174) 0:02:08.775 ****
2026-02-04 00:48:54.530537 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:48:54.530542 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:48:54.530547 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:48:54.530552 | orchestrator |
2026-02-04 00:48:54.530556 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-02-04 00:48:54.530561 | orchestrator | Wednesday 04 February 2026 00:46:37 +0000 (0:00:00.629) 0:02:09.404 ****
2026-02-04 00:48:54.530565 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:48:54.530570 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:48:54.530575 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:48:54.530585 | orchestrator |
2026-02-04 00:48:54.530590 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-02-04 00:48:54.530595 | orchestrator | Wednesday 04 February 2026 00:46:37 +0000 (0:00:00.665) 0:02:10.070 ****
2026-02-04 00:48:54.530600 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:48:54.530604 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:48:54.530609 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:48:54.530614 | orchestrator |
2026-02-04 00:48:54.530619 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-02-04 00:48:54.530623 | orchestrator | Wednesday 04 February 2026 00:46:38 +0000 (0:00:00.713) 0:02:10.783 ****
2026-02-04 00:48:54.530628 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:48:54.530632 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:48:54.530637 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:48:54.530641 | orchestrator |
2026-02-04 00:48:54.530650 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-02-04 00:48:54.530656 | orchestrator | Wednesday 04 February 2026 00:46:39 +0000 (0:00:00.941) 0:02:11.725 ****
2026-02-04 00:48:54.530660 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:48:54.530665 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:48:54.530669 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:48:54.530674 | orchestrator |
2026-02-04 00:48:54.530679 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-02-04 00:48:54.530684 | orchestrator | Wednesday 04 February 2026 00:46:39 +0000 (0:00:00.295) 0:02:12.021 ****
2026-02-04 00:48:54.530688 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:48:54.530693 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:48:54.530697 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:48:54.530702 | orchestrator |
2026-02-04 00:48:54.530706 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-02-04 00:48:54.530711 | orchestrator | Wednesday 04 February 2026 00:46:40 +0000 (0:00:00.710) 0:02:12.732 ****
2026-02-04 00:48:54.530716 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:48:54.530721 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:48:54.530726 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:48:54.530730 | orchestrator |
2026-02-04 00:48:54.530735 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-02-04 00:48:54.530740 | orchestrator | Wednesday 04 February 2026 00:46:41 +0000 (0:00:00.612) 0:02:13.345 ****
2026-02-04 00:48:54.530744 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:48:54.530749 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:48:54.530754 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:48:54.530758 | orchestrator |
2026-02-04 00:48:54.530763 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-02-04 00:48:54.530768 | orchestrator | Wednesday 04 February 2026 00:46:42 +0000 (0:00:01.014) 0:02:14.359 ****
2026-02-04 00:48:54.530773 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:48:54.530777 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:48:54.530782 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:48:54.530786 | orchestrator |
2026-02-04 00:48:54.530791 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-02-04 00:48:54.530796 | orchestrator | Wednesday 04 February 2026 00:46:43 +0000 (0:00:00.933) 0:02:15.293 ****
2026-02-04 00:48:54.530801 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:48:54.530805 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:48:54.530810 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:48:54.530814 | orchestrator |
2026-02-04 00:48:54.530819 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-02-04 00:48:54.530824 | orchestrator | Wednesday 04 February 2026 00:46:43 +0000 (0:00:00.284) 0:02:15.577 ****
2026-02-04 00:48:54.530829 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:48:54.530833 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:48:54.530838 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:48:54.530842 | orchestrator |
2026-02-04 00:48:54.530847 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-02-04 00:48:54.530859 | orchestrator | Wednesday 04 February 2026 00:46:43 +0000 (0:00:00.320) 0:02:15.898 ****
2026-02-04 00:48:54.530864 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:48:54.530869 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:48:54.530873 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:48:54.530878 | orchestrator |
2026-02-04 00:48:54.530882 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-02-04 00:48:54.530887 | orchestrator | Wednesday 04 February 2026 00:46:44 +0000 (0:00:01.044) 0:02:16.942 ****
2026-02-04 00:48:54.530892 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:48:54.530901 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:48:54.530906 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:48:54.530911 | orchestrator |
2026-02-04 00:48:54.530915 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-02-04 00:48:54.530920 | orchestrator | Wednesday 04 February 2026 00:46:45 +0000 (0:00:00.863) 0:02:17.806 ****
2026-02-04 00:48:54.530925 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-04 00:48:54.530930 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-04 00:48:54.530935 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-04 00:48:54.530940 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-04 00:48:54.530944 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-04 00:48:54.530949 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-04 00:48:54.530954 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-04 00:48:54.530959 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-04 00:48:54.530963 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-04 00:48:54.530968 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-02-04 00:48:54.530973 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-04 00:48:54.530978 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-04 00:48:54.530983 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-02-04 00:48:54.530988 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-04 00:48:54.530996 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-04 00:48:54.531013 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-04 00:48:54.531017 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-04 00:48:54.531022 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-04 00:48:54.531027 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-04 00:48:54.531031 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-04 00:48:54.531036 | orchestrator |
2026-02-04 00:48:54.531040 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-02-04 00:48:54.531045 | orchestrator |
2026-02-04 00:48:54.531050 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-02-04 00:48:54.531054 | orchestrator | Wednesday 04 February 2026 00:46:49 +0000 (0:00:03.423) 0:02:21.229 ****
2026-02-04 00:48:54.531059 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:48:54.531069 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:48:54.531076 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:48:54.531084 | orchestrator |
2026-02-04 00:48:54.531091 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-02-04 00:48:54.531103 | orchestrator | Wednesday 04 February 2026 00:46:49 +0000 (0:00:00.471) 0:02:21.700 ****
2026-02-04 00:48:54.531114 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:48:54.531123 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:48:54.531130 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:48:54.531138 | orchestrator |
2026-02-04 00:48:54.531146 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-02-04 00:48:54.531153 | orchestrator | Wednesday 04 February 2026 00:46:50 +0000 (0:00:00.611) 0:02:22.312 ****
2026-02-04 00:48:54.531160 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:48:54.531168 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:48:54.531176 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:48:54.531184 | orchestrator |
2026-02-04 00:48:54.531192 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-02-04 00:48:54.531198 | orchestrator | Wednesday 04 February 2026 00:46:50 +0000 (0:00:00.397) 0:02:22.710 ****
2026-02-04 00:48:54.531206 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 00:48:54.531215 | orchestrator |
2026-02-04 00:48:54.531223 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-02-04 00:48:54.531230 | orchestrator | Wednesday 04 February 2026 00:46:51 +0000 (0:00:00.644) 0:02:23.354 ****
2026-02-04 00:48:54.531239 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:48:54.531247 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:48:54.531255 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:48:54.531263 | orchestrator |
2026-02-04 00:48:54.531271 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-02-04 00:48:54.531278 | orchestrator | Wednesday 04 February 2026 00:46:51 +0000 (0:00:00.332) 0:02:23.687 ****
2026-02-04 00:48:54.531283 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:48:54.531290 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:48:54.531297 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:48:54.531304 | orchestrator |
2026-02-04 00:48:54.531312 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-02-04 00:48:54.531325 | orchestrator | Wednesday 04 February 2026 00:46:51 +0000 (0:00:00.296) 0:02:23.984 ****
2026-02-04 00:48:54.531333 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:48:54.531340 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:48:54.531348 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:48:54.531355 | orchestrator |
2026-02-04 00:48:54.531362 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-02-04 00:48:54.531370 | orchestrator | Wednesday 04 February 2026 00:46:52 +0000 (0:00:00.389) 0:02:24.373 ****
2026-02-04 00:48:54.531377 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:48:54.531385 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:48:54.531392 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:48:54.531401 | orchestrator |
2026-02-04 00:48:54.531409 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-02-04 00:48:54.531417 | orchestrator | Wednesday 04 February 2026 00:46:53 +0000 (0:00:01.023) 0:02:25.396 ****
2026-02-04 00:48:54.531424 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:48:54.531432 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:48:54.531440 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:48:54.531445 | orchestrator |
2026-02-04 00:48:54.531450 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-02-04 00:48:54.531455 | orchestrator | Wednesday 04 February 2026 00:46:54 +0000 (0:00:01.155) 0:02:26.552 ****
2026-02-04 00:48:54.531459 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:48:54.531464 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:48:54.531469 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:48:54.531480 | orchestrator |
2026-02-04 00:48:54.531485 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-02-04 00:48:54.531490 | orchestrator | Wednesday 04 February 2026 00:46:55 +0000 (0:00:01.312) 0:02:27.864 ****
2026-02-04 00:48:54.531522 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:48:54.531527 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:48:54.531532 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:48:54.531536 | orchestrator |
2026-02-04 00:48:54.531541 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-04 00:48:54.531546 | orchestrator |
2026-02-04 00:48:54.531551 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-02-04 00:48:54.531555 | orchestrator | Wednesday 04 February 2026 00:47:06 +0000 (0:00:10.875) 0:02:38.739 ****
2026-02-04 00:48:54.531560 | orchestrator | ok: [testbed-manager]
2026-02-04 00:48:54.531565 | orchestrator |
2026-02-04 00:48:54.531570 | orchestrator | TASK [Create .kube directory] **************************************************
2026-02-04 00:48:54.531574 | orchestrator | Wednesday 04 February 2026 00:47:07 +0000 (0:00:01.409) 0:02:40.149 ****
2026-02-04 00:48:54.531579 | orchestrator | changed: [testbed-manager]
2026-02-04 00:48:54.531584 | orchestrator |
2026-02-04 00:48:54.531594 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-04 00:48:54.531599 | orchestrator | Wednesday 04 February 2026 00:47:08 +0000 (0:00:00.410) 0:02:40.559 ****
2026-02-04 00:48:54.531604 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-04 00:48:54.531609 | orchestrator |
2026-02-04 00:48:54.531613 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-04 00:48:54.531618 | orchestrator | Wednesday 04 February 2026 00:47:08 +0000 (0:00:00.612) 0:02:41.172 ****
2026-02-04 00:48:54.531623 | orchestrator | changed: [testbed-manager]
2026-02-04 00:48:54.531628 | orchestrator |
2026-02-04 00:48:54.531633 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-02-04 00:48:54.531638 | orchestrator | Wednesday 04 February 2026 00:47:09 +0000 (0:00:00.976) 0:02:42.148 ****
2026-02-04 00:48:54.531642 | orchestrator | changed: [testbed-manager]
2026-02-04 00:48:54.531647 | orchestrator |
2026-02-04 00:48:54.531652 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-02-04 00:48:54.531657 | orchestrator | Wednesday 04 February 2026 00:47:10 +0000 (0:00:00.585) 0:02:42.734 ****
2026-02-04 00:48:54.531662 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-04 00:48:54.531666 | orchestrator |
2026-02-04 00:48:54.531671 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-04 00:48:54.531676 | orchestrator | Wednesday 04 February 2026 00:47:11 +0000 (0:00:01.427) 0:02:44.161 ****
2026-02-04 00:48:54.531680 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-04 00:48:54.531685 | orchestrator |
2026-02-04 00:48:54.531690 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-02-04 00:48:54.531694 | orchestrator | Wednesday 04 February 2026 00:47:12 +0000 (0:00:00.766) 0:02:44.927 ****
2026-02-04 00:48:54.531700 | orchestrator | changed: [testbed-manager]
2026-02-04 00:48:54.531705 | orchestrator |
2026-02-04 00:48:54.531709 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-04 00:48:54.531714 | orchestrator | Wednesday 04 February 2026 00:47:13 +0000 (0:00:00.397) 0:02:45.325 ****
2026-02-04 00:48:54.531719 | orchestrator | changed: [testbed-manager]
2026-02-04 00:48:54.531724 | orchestrator |
2026-02-04 00:48:54.531728 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-02-04 00:48:54.531733 | orchestrator |
2026-02-04 00:48:54.531737 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-02-04 00:48:54.531742 | orchestrator | Wednesday 04 February 2026 00:47:13 +0000 (0:00:00.581) 0:02:45.907 ****
2026-02-04 00:48:54.531747 | orchestrator | ok: [testbed-manager]
2026-02-04 00:48:54.531752 | orchestrator |
2026-02-04 00:48:54.531756 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-02-04 00:48:54.531765 | orchestrator | Wednesday 04 February 2026 00:47:13 +0000 (0:00:00.122) 0:02:46.029 ****
2026-02-04 00:48:54.531770 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-02-04 00:48:54.531775 | orchestrator |
2026-02-04 00:48:54.531779 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-02-04 00:48:54.531784 | orchestrator | Wednesday 04 February 2026 00:47:14 +0000 (0:00:00.231) 0:02:46.261 ****
2026-02-04 00:48:54.531789 | orchestrator | ok: [testbed-manager]
2026-02-04 00:48:54.531794 | orchestrator |
2026-02-04 00:48:54.531798 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-02-04 00:48:54.531803 | orchestrator | Wednesday 04 February 2026 00:47:15 +0000 (0:00:00.962) 0:02:47.223 ****
2026-02-04 00:48:54.531812 | orchestrator | ok: [testbed-manager]
2026-02-04 00:48:54.531817 | orchestrator |
2026-02-04 00:48:54.531822 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-02-04 00:48:54.531827 | orchestrator | Wednesday 04 February 2026 00:47:16 +0000 (0:00:01.346) 0:02:48.570 ****
2026-02-04 00:48:54.531832 | orchestrator | changed: [testbed-manager]
2026-02-04 00:48:54.531836 | orchestrator |
2026-02-04 00:48:54.531841 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-02-04 00:48:54.531846 | orchestrator | Wednesday 04 February 2026 00:47:17 +0000 (0:00:00.823) 0:02:49.394 ****
2026-02-04 00:48:54.531850 | orchestrator | ok: [testbed-manager]
2026-02-04 00:48:54.531855 | orchestrator |
2026-02-04 00:48:54.531860 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-02-04 00:48:54.531864 | orchestrator | Wednesday 04 February 2026 00:47:17 +0000 (0:00:00.400) 0:02:49.794 ****
2026-02-04 00:48:54.531869 | orchestrator | changed: [testbed-manager]
2026-02-04 00:48:54.531874 | orchestrator |
2026-02-04 00:48:54.531878 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-02-04 00:48:54.531883 | orchestrator | Wednesday 04 February 2026 00:47:24 +0000 (0:00:06.714) 0:02:56.509 ****
2026-02-04 00:48:54.531888 | orchestrator | changed: [testbed-manager]
2026-02-04 00:48:54.531892 | orchestrator |
2026-02-04 00:48:54.531897 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-02-04 00:48:54.531902 | orchestrator | Wednesday 04 February 2026 00:47:35 +0000 (0:00:11.241) 0:03:07.751 ****
2026-02-04 00:48:54.531907 | orchestrator | ok: [testbed-manager]
2026-02-04 00:48:54.531911 | orchestrator |
2026-02-04 00:48:54.531916 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-02-04 00:48:54.531921 | orchestrator |
2026-02-04 00:48:54.531925 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-02-04 00:48:54.531930 | orchestrator | Wednesday 04 February 2026 00:47:35 +0000 (0:00:00.373) 0:03:08.124 ****
2026-02-04 00:48:54.531935 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:48:54.531940 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:48:54.531944 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:48:54.531949 | orchestrator |
2026-02-04 00:48:54.531953 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-02-04 00:48:54.531958 | orchestrator | Wednesday 04 February 2026 00:47:36 +0000 (0:00:00.251) 0:03:08.375 ****
2026-02-04 00:48:54.531963 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:48:54.531967 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:48:54.531972 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:48:54.531977 | orchestrator |
2026-02-04 00:48:54.531982 |
orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-02-04 00:48:54.531989 | orchestrator | Wednesday 04 February 2026 00:47:36 +0000 (0:00:00.293) 0:03:08.668 **** 2026-02-04 00:48:54.531994 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:48:54.531999 | orchestrator | 2026-02-04 00:48:54.532004 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-02-04 00:48:54.532008 | orchestrator | Wednesday 04 February 2026 00:47:37 +0000 (0:00:00.574) 0:03:09.242 **** 2026-02-04 00:48:54.532017 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-04 00:48:54.532022 | orchestrator | 2026-02-04 00:48:54.532026 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-02-04 00:48:54.532031 | orchestrator | Wednesday 04 February 2026 00:47:37 +0000 (0:00:00.742) 0:03:09.985 **** 2026-02-04 00:48:54.532036 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 00:48:54.532041 | orchestrator | 2026-02-04 00:48:54.532046 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-04 00:48:54.532050 | orchestrator | Wednesday 04 February 2026 00:47:38 +0000 (0:00:00.807) 0:03:10.793 **** 2026-02-04 00:48:54.532055 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:48:54.532059 | orchestrator | 2026-02-04 00:48:54.532064 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-04 00:48:54.532069 | orchestrator | Wednesday 04 February 2026 00:47:38 +0000 (0:00:00.100) 0:03:10.893 **** 2026-02-04 00:48:54.532074 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 00:48:54.532079 | orchestrator | 2026-02-04 00:48:54.532083 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-04 
00:48:54.532088 | orchestrator | Wednesday 04 February 2026 00:47:39 +0000 (0:00:00.796) 0:03:11.690 **** 2026-02-04 00:48:54.532093 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:48:54.532098 | orchestrator | 2026-02-04 00:48:54.532102 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-04 00:48:54.532107 | orchestrator | Wednesday 04 February 2026 00:47:39 +0000 (0:00:00.102) 0:03:11.793 **** 2026-02-04 00:48:54.532112 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:48:54.532117 | orchestrator | 2026-02-04 00:48:54.532122 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-04 00:48:54.532127 | orchestrator | Wednesday 04 February 2026 00:47:39 +0000 (0:00:00.079) 0:03:11.872 **** 2026-02-04 00:48:54.532131 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:48:54.532136 | orchestrator | 2026-02-04 00:48:54.532141 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-04 00:48:54.532146 | orchestrator | Wednesday 04 February 2026 00:47:39 +0000 (0:00:00.080) 0:03:11.952 **** 2026-02-04 00:48:54.532150 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:48:54.532155 | orchestrator | 2026-02-04 00:48:54.532160 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-04 00:48:54.532165 | orchestrator | Wednesday 04 February 2026 00:47:39 +0000 (0:00:00.091) 0:03:12.043 **** 2026-02-04 00:48:54.532170 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-04 00:48:54.532175 | orchestrator | 2026-02-04 00:48:54.532179 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-04 00:48:54.532184 | orchestrator | Wednesday 04 February 2026 00:47:44 +0000 (0:00:04.825) 0:03:16.869 **** 2026-02-04 00:48:54.532189 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=deployment/cilium-operator) 2026-02-04 00:48:54.532197 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2026-02-04 00:48:54.532202 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-04 00:48:54.532207 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-04 00:48:54.532212 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-04 00:48:54.532217 | orchestrator | 2026-02-04 00:48:54.532221 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-04 00:48:54.532226 | orchestrator | Wednesday 04 February 2026 00:48:27 +0000 (0:00:42.552) 0:03:59.421 **** 2026-02-04 00:48:54.532231 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 00:48:54.532236 | orchestrator | 2026-02-04 00:48:54.532244 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-04 00:48:54.532251 | orchestrator | Wednesday 04 February 2026 00:48:28 +0000 (0:00:01.042) 0:04:00.464 **** 2026-02-04 00:48:54.532259 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-04 00:48:54.532273 | orchestrator | 2026-02-04 00:48:54.532280 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-04 00:48:54.532287 | orchestrator | Wednesday 04 February 2026 00:48:29 +0000 (0:00:01.419) 0:04:01.883 **** 2026-02-04 00:48:54.532295 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-04 00:48:54.532302 | orchestrator | 2026-02-04 00:48:54.532310 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-04 00:48:54.532317 | orchestrator | Wednesday 04 February 2026 00:48:30 +0000 (0:00:01.026) 0:04:02.910 **** 2026-02-04 00:48:54.532324 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:48:54.532331 | 
orchestrator | 2026-02-04 00:48:54.532339 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-04 00:48:54.532347 | orchestrator | Wednesday 04 February 2026 00:48:30 +0000 (0:00:00.111) 0:04:03.021 **** 2026-02-04 00:48:54.532355 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-04 00:48:54.532362 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-04 00:48:54.532369 | orchestrator | 2026-02-04 00:48:54.532376 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-04 00:48:54.532384 | orchestrator | Wednesday 04 February 2026 00:48:32 +0000 (0:00:01.717) 0:04:04.739 **** 2026-02-04 00:48:54.532392 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:48:54.532400 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:48:54.532409 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:48:54.532414 | orchestrator | 2026-02-04 00:48:54.532423 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-04 00:48:54.532428 | orchestrator | Wednesday 04 February 2026 00:48:32 +0000 (0:00:00.375) 0:04:05.114 **** 2026-02-04 00:48:54.532433 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:48:54.532438 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:48:54.532442 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:48:54.532447 | orchestrator | 2026-02-04 00:48:54.532451 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-04 00:48:54.532456 | orchestrator | 2026-02-04 00:48:54.532461 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-04 00:48:54.532465 | orchestrator | Wednesday 04 February 2026 00:48:34 +0000 (0:00:01.292) 0:04:06.407 **** 2026-02-04 00:48:54.532470 | 
orchestrator | ok: [testbed-manager] 2026-02-04 00:48:54.532474 | orchestrator | 2026-02-04 00:48:54.532479 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-02-04 00:48:54.532484 | orchestrator | Wednesday 04 February 2026 00:48:34 +0000 (0:00:00.138) 0:04:06.546 **** 2026-02-04 00:48:54.532488 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-04 00:48:54.532507 | orchestrator | 2026-02-04 00:48:54.532513 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-04 00:48:54.532517 | orchestrator | Wednesday 04 February 2026 00:48:34 +0000 (0:00:00.249) 0:04:06.796 **** 2026-02-04 00:48:54.532522 | orchestrator | changed: [testbed-manager] 2026-02-04 00:48:54.532527 | orchestrator | 2026-02-04 00:48:54.532531 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-04 00:48:54.532536 | orchestrator | 2026-02-04 00:48:54.532541 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-04 00:48:54.532545 | orchestrator | Wednesday 04 February 2026 00:48:40 +0000 (0:00:05.890) 0:04:12.687 **** 2026-02-04 00:48:54.532550 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:48:54.532555 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:48:54.532559 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:48:54.532564 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:48:54.532568 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:48:54.532573 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:48:54.532578 | orchestrator | 2026-02-04 00:48:54.532582 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-04 00:48:54.532592 | orchestrator | Wednesday 04 February 2026 00:48:41 +0000 (0:00:00.914) 0:04:13.601 **** 2026-02-04 00:48:54.532597 | orchestrator | ok: [testbed-node-3 
-> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-04 00:48:54.532601 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-04 00:48:54.532606 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-04 00:48:54.532611 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-04 00:48:54.532616 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-04 00:48:54.532620 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-04 00:48:54.532625 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-04 00:48:54.532630 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-04 00:48:54.532639 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-04 00:48:54.532644 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-04 00:48:54.532649 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-04 00:48:54.532653 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-04 00:48:54.532658 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-04 00:48:54.532663 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-04 00:48:54.532667 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-04 00:48:54.532672 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-04 00:48:54.532677 | orchestrator | ok: 
[testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-04 00:48:54.532681 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-04 00:48:54.532686 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-04 00:48:54.532690 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-04 00:48:54.532695 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-04 00:48:54.532699 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-04 00:48:54.532704 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-04 00:48:54.532709 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-04 00:48:54.532713 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-04 00:48:54.532718 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-04 00:48:54.532722 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-04 00:48:54.532731 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-04 00:48:54.532736 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-04 00:48:54.532740 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-04 00:48:54.532745 | orchestrator | 2026-02-04 00:48:54.532750 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-04 00:48:54.532754 | orchestrator | Wednesday 04 February 2026 00:48:51 +0000 (0:00:09.841) 0:04:23.442 **** 2026-02-04 00:48:54.532759 | 
orchestrator | skipping: [testbed-node-3] 2026-02-04 00:48:54.532763 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:48:54.532772 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:48:54.532777 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:48:54.532781 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:48:54.532786 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:48:54.532791 | orchestrator | 2026-02-04 00:48:54.532795 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-04 00:48:54.532800 | orchestrator | Wednesday 04 February 2026 00:48:51 +0000 (0:00:00.517) 0:04:23.960 **** 2026-02-04 00:48:54.532805 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:48:54.532809 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:48:54.532814 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:48:54.532819 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:48:54.532823 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:48:54.532828 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:48:54.532832 | orchestrator | 2026-02-04 00:48:54.532837 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:48:54.532842 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:48:54.532847 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-04 00:48:54.532852 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-04 00:48:54.532857 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-04 00:48:54.532861 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-04 00:48:54.532866 | orchestrator | 
testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-04 00:48:54.532871 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-04 00:48:54.532876 | orchestrator | 2026-02-04 00:48:54.532881 | orchestrator | 2026-02-04 00:48:54.532886 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:48:54.532894 | orchestrator | Wednesday 04 February 2026 00:48:52 +0000 (0:00:00.389) 0:04:24.349 **** 2026-02-04 00:48:54.532899 | orchestrator | =============================================================================== 2026-02-04 00:48:54.532904 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.00s 2026-02-04 00:48:54.532909 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.55s 2026-02-04 00:48:54.532913 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.17s 2026-02-04 00:48:54.532918 | orchestrator | kubectl : Install required packages ------------------------------------ 11.24s 2026-02-04 00:48:54.532923 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.88s 2026-02-04 00:48:54.532927 | orchestrator | Manage labels ----------------------------------------------------------- 9.84s 2026-02-04 00:48:54.532932 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.71s 2026-02-04 00:48:54.532937 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.89s 2026-02-04 00:48:54.532941 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.12s 2026-02-04 00:48:54.532946 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.83s 2026-02-04 00:48:54.532951 | orchestrator | k3s_server : 
Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.42s 2026-02-04 00:48:54.532959 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.82s 2026-02-04 00:48:54.532964 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.12s 2026-02-04 00:48:54.532968 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.93s 2026-02-04 00:48:54.532973 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.84s 2026-02-04 00:48:54.532978 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 1.82s 2026-02-04 00:48:54.532982 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.80s 2026-02-04 00:48:54.532987 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.73s 2026-02-04 00:48:54.532992 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.72s 2026-02-04 00:48:54.533000 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.55s 2026-02-04 00:48:54.533005 | orchestrator | 2026-02-04 00:48:54 | INFO  | Task 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 is in state STARTED 2026-02-04 00:48:54.533010 | orchestrator | 2026-02-04 00:48:54 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:48:54.533015 | orchestrator | 2026-02-04 00:48:54 | INFO  | Task 0bd83bee-b185-4664-a1f4-14600811c8a7 is in state STARTED 2026-02-04 00:48:54.533020 | orchestrator | 2026-02-04 00:48:54 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:48:57.560673 | orchestrator | 2026-02-04 00:48:57 | INFO  | Task df8519fa-054d-4667-aa7e-5c42c0b1985b is in state STARTED 2026-02-04 00:48:57.563834 | orchestrator | 2026-02-04 00:48:57 | INFO  | Task 
836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:48:57.566233 | orchestrator | 2026-02-04 00:48:57 | INFO  | Task 8127556d-45ed-46b3-aff3-f9b5bacee0f9 is in state STARTED 2026-02-04 00:48:57.567929 | orchestrator | 2026-02-04 00:48:57 | INFO  | Task 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 is in state STARTED 2026-02-04 00:48:57.569605 | orchestrator | 2026-02-04 00:48:57 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:48:57.571010 | orchestrator | 2026-02-04 00:48:57 | INFO  | Task 0bd83bee-b185-4664-a1f4-14600811c8a7 is in state STARTED 2026-02-04 00:48:57.571069 | orchestrator | 2026-02-04 00:48:57 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:49:00.605002 | orchestrator | 2026-02-04 00:49:00 | INFO  | Task df8519fa-054d-4667-aa7e-5c42c0b1985b is in state STARTED 2026-02-04 00:49:00.605958 | orchestrator | 2026-02-04 00:49:00 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:49:00.606449 | orchestrator | 2026-02-04 00:49:00 | INFO  | Task 8127556d-45ed-46b3-aff3-f9b5bacee0f9 is in state STARTED 2026-02-04 00:49:00.607771 | orchestrator | 2026-02-04 00:49:00 | INFO  | Task 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 is in state STARTED 2026-02-04 00:49:00.608307 | orchestrator | 2026-02-04 00:49:00 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:49:00.608853 | orchestrator | 2026-02-04 00:49:00 | INFO  | Task 0bd83bee-b185-4664-a1f4-14600811c8a7 is in state SUCCESS 2026-02-04 00:49:00.608907 | orchestrator | 2026-02-04 00:49:00 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:49:03.640203 | orchestrator | 2026-02-04 00:49:03 | INFO  | Task df8519fa-054d-4667-aa7e-5c42c0b1985b is in state STARTED 2026-02-04 00:49:03.641849 | orchestrator | 2026-02-04 00:49:03 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:49:03.641880 | orchestrator | 2026-02-04 00:49:03 | INFO  | Task 
8127556d-45ed-46b3-aff3-f9b5bacee0f9 is in state SUCCESS 2026-02-04 00:49:03.642632 | orchestrator | 2026-02-04 00:49:03 | INFO  | Task 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 is in state STARTED 2026-02-04 00:49:03.643011 | orchestrator | 2026-02-04 00:49:03 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:49:03.643288 | orchestrator | 2026-02-04 00:49:03 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:49:06.675938 | orchestrator | 2026-02-04 00:49:06 | INFO  | Task df8519fa-054d-4667-aa7e-5c42c0b1985b is in state STARTED 2026-02-04 00:49:06.677553 | orchestrator | 2026-02-04 00:49:06 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:49:06.680442 | orchestrator | 2026-02-04 00:49:06 | INFO  | Task 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 is in state STARTED 2026-02-04 00:49:06.682752 | orchestrator | 2026-02-04 00:49:06 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:49:06.682970 | orchestrator | 2026-02-04 00:49:06 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:49:09.713870 | orchestrator | 2026-02-04 00:49:09 | INFO  | Task df8519fa-054d-4667-aa7e-5c42c0b1985b is in state STARTED 2026-02-04 00:49:09.715414 | orchestrator | 2026-02-04 00:49:09 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:49:09.718226 | orchestrator | 2026-02-04 00:49:09 | INFO  | Task 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 is in state STARTED 2026-02-04 00:49:09.720581 | orchestrator | 2026-02-04 00:49:09 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:49:09.720622 | orchestrator | 2026-02-04 00:49:09 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:49:12.749265 | orchestrator | 2026-02-04 00:49:12 | INFO  | Task df8519fa-054d-4667-aa7e-5c42c0b1985b is in state STARTED 2026-02-04 00:49:12.751252 | orchestrator | 2026-02-04 00:49:12 | INFO  | Task 
836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:49:12.751322 | orchestrator | 2026-02-04 00:49:12 | INFO  | Task 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 is in state STARTED 2026-02-04 00:49:12.752262 | orchestrator | 2026-02-04 00:49:12 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:49:12.752315 | orchestrator | 2026-02-04 00:49:12 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:49:15.787804 | orchestrator | 2026-02-04 00:49:15 | INFO  | Task df8519fa-054d-4667-aa7e-5c42c0b1985b is in state SUCCESS 2026-02-04 00:49:15.788790 | orchestrator | 2026-02-04 00:49:15.788826 | orchestrator | 2026-02-04 00:49:15.788834 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-02-04 00:49:15.788841 | orchestrator | 2026-02-04 00:49:15.788848 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-04 00:49:15.788854 | orchestrator | Wednesday 04 February 2026 00:48:56 +0000 (0:00:00.156) 0:00:00.156 **** 2026-02-04 00:49:15.788861 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-04 00:49:15.788867 | orchestrator | 2026-02-04 00:49:15.788873 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-04 00:49:15.788881 | orchestrator | Wednesday 04 February 2026 00:48:57 +0000 (0:00:00.817) 0:00:00.974 **** 2026-02-04 00:49:15.788885 | orchestrator | changed: [testbed-manager] 2026-02-04 00:49:15.788889 | orchestrator | 2026-02-04 00:49:15.788893 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-02-04 00:49:15.788897 | orchestrator | Wednesday 04 February 2026 00:48:58 +0000 (0:00:00.951) 0:00:01.926 **** 2026-02-04 00:49:15.788901 | orchestrator | changed: [testbed-manager] 2026-02-04 00:49:15.788905 | orchestrator | 2026-02-04 00:49:15.788909 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-04 00:49:15.788914 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:49:15.788929 | orchestrator | 2026-02-04 00:49:15.788933 | orchestrator | 2026-02-04 00:49:15.788937 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:49:15.788941 | orchestrator | Wednesday 04 February 2026 00:48:58 +0000 (0:00:00.386) 0:00:02.313 **** 2026-02-04 00:49:15.788946 | orchestrator | =============================================================================== 2026-02-04 00:49:15.788952 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.95s 2026-02-04 00:49:15.788960 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.82s 2026-02-04 00:49:15.788969 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.39s 2026-02-04 00:49:15.788975 | orchestrator | 2026-02-04 00:49:15.788981 | orchestrator | 2026-02-04 00:49:15.788986 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-04 00:49:15.788992 | orchestrator | 2026-02-04 00:49:15.788998 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-04 00:49:15.789004 | orchestrator | Wednesday 04 February 2026 00:48:56 +0000 (0:00:00.148) 0:00:00.148 **** 2026-02-04 00:49:15.789010 | orchestrator | ok: [testbed-manager] 2026-02-04 00:49:15.789017 | orchestrator | 2026-02-04 00:49:15.789024 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-04 00:49:15.789030 | orchestrator | Wednesday 04 February 2026 00:48:56 +0000 (0:00:00.441) 0:00:00.589 **** 2026-02-04 00:49:15.789036 | orchestrator | ok: [testbed-manager] 2026-02-04 00:49:15.789044 | orchestrator | 2026-02-04 
00:49:15.789048 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-04 00:49:15.789052 | orchestrator | Wednesday 04 February 2026 00:48:57 +0000 (0:00:00.535) 0:00:01.125 **** 2026-02-04 00:49:15.789056 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-04 00:49:15.789060 | orchestrator | 2026-02-04 00:49:15.789064 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-04 00:49:15.789067 | orchestrator | Wednesday 04 February 2026 00:48:57 +0000 (0:00:00.654) 0:00:01.779 **** 2026-02-04 00:49:15.789071 | orchestrator | changed: [testbed-manager] 2026-02-04 00:49:15.789075 | orchestrator | 2026-02-04 00:49:15.789079 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-04 00:49:15.789083 | orchestrator | Wednesday 04 February 2026 00:48:58 +0000 (0:00:01.140) 0:00:02.920 **** 2026-02-04 00:49:15.789087 | orchestrator | changed: [testbed-manager] 2026-02-04 00:49:15.789091 | orchestrator | 2026-02-04 00:49:15.789095 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-04 00:49:15.789099 | orchestrator | Wednesday 04 February 2026 00:48:59 +0000 (0:00:00.563) 0:00:03.483 **** 2026-02-04 00:49:15.789102 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-04 00:49:15.789106 | orchestrator | 2026-02-04 00:49:15.789110 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-04 00:49:15.789114 | orchestrator | Wednesday 04 February 2026 00:49:00 +0000 (0:00:01.431) 0:00:04.915 **** 2026-02-04 00:49:15.789118 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-04 00:49:15.789123 | orchestrator | 2026-02-04 00:49:15.789131 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-02-04 00:49:15.789140 | orchestrator | 
Wednesday 04 February 2026 00:49:01 +0000 (0:00:00.600) 0:00:05.515 **** 2026-02-04 00:49:15.789146 | orchestrator | ok: [testbed-manager] 2026-02-04 00:49:15.789152 | orchestrator | 2026-02-04 00:49:15.789158 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-04 00:49:15.789165 | orchestrator | Wednesday 04 February 2026 00:49:01 +0000 (0:00:00.316) 0:00:05.831 **** 2026-02-04 00:49:15.789171 | orchestrator | ok: [testbed-manager] 2026-02-04 00:49:15.789178 | orchestrator | 2026-02-04 00:49:15.789184 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:49:15.789197 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:49:15.789209 | orchestrator | 2026-02-04 00:49:15.789215 | orchestrator | 2026-02-04 00:49:15.789221 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:49:15.789227 | orchestrator | Wednesday 04 February 2026 00:49:02 +0000 (0:00:00.307) 0:00:06.138 **** 2026-02-04 00:49:15.789234 | orchestrator | =============================================================================== 2026-02-04 00:49:15.789240 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.43s 2026-02-04 00:49:15.789247 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.14s 2026-02-04 00:49:15.789254 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.65s 2026-02-04 00:49:15.789269 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.60s 2026-02-04 00:49:15.789276 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.56s 2026-02-04 00:49:15.789282 | orchestrator | Create .kube directory -------------------------------------------------- 0.54s 
2026-02-04 00:49:15.789288 | orchestrator | Get home directory of operator user ------------------------------------- 0.44s 2026-02-04 00:49:15.789295 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.32s 2026-02-04 00:49:15.789301 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.31s 2026-02-04 00:49:15.789308 | orchestrator | 2026-02-04 00:49:15.789314 | orchestrator | 2026-02-04 00:49:15.789320 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2026-02-04 00:49:15.789326 | orchestrator | 2026-02-04 00:49:15.789332 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-02-04 00:49:15.789338 | orchestrator | Wednesday 04 February 2026 00:46:55 +0000 (0:00:00.243) 0:00:00.243 **** 2026-02-04 00:49:15.789345 | orchestrator | ok: [localhost] => { 2026-02-04 00:49:15.789352 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-02-04 00:49:15.789358 | orchestrator | } 2026-02-04 00:49:15.789365 | orchestrator | 2026-02-04 00:49:15.789371 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-02-04 00:49:15.789378 | orchestrator | Wednesday 04 February 2026 00:46:55 +0000 (0:00:00.147) 0:00:00.391 **** 2026-02-04 00:49:15.789385 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-02-04 00:49:15.789393 | orchestrator | ...ignoring 2026-02-04 00:49:15.789399 | orchestrator | 2026-02-04 00:49:15.789406 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-02-04 00:49:15.789413 | orchestrator | Wednesday 04 February 2026 00:46:59 +0000 (0:00:03.300) 0:00:03.692 **** 2026-02-04 00:49:15.789420 | orchestrator | skipping: [localhost] 2026-02-04 00:49:15.789427 | orchestrator | 2026-02-04 00:49:15.789433 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-02-04 00:49:15.789440 | orchestrator | Wednesday 04 February 2026 00:46:59 +0000 (0:00:00.064) 0:00:03.756 **** 2026-02-04 00:49:15.789447 | orchestrator | ok: [localhost] 2026-02-04 00:49:15.789454 | orchestrator | 2026-02-04 00:49:15.789460 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 00:49:15.789467 | orchestrator | 2026-02-04 00:49:15.789535 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 00:49:15.789543 | orchestrator | Wednesday 04 February 2026 00:46:59 +0000 (0:00:00.306) 0:00:04.063 **** 2026-02-04 00:49:15.789550 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:49:15.789557 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:49:15.789564 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:49:15.789571 | orchestrator | 2026-02-04 00:49:15.789578 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 00:49:15.789585 | orchestrator | Wednesday 04 February 2026 00:46:59 +0000 (0:00:00.277) 0:00:04.340 **** 2026-02-04 00:49:15.789592 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-02-04 00:49:15.789604 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2026-02-04 00:49:15.789611 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-04 00:49:15.789618 | orchestrator | 2026-02-04 00:49:15.789625 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-04 00:49:15.789631 | orchestrator | 2026-02-04 00:49:15.789638 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-04 00:49:15.789644 | orchestrator | Wednesday 04 February 2026 00:47:00 +0000 (0:00:00.516) 0:00:04.856 **** 2026-02-04 00:49:15.789651 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:49:15.789658 | orchestrator | 2026-02-04 00:49:15.789664 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-04 00:49:15.789672 | orchestrator | Wednesday 04 February 2026 00:47:00 +0000 (0:00:00.497) 0:00:05.354 **** 2026-02-04 00:49:15.789678 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:49:15.789685 | orchestrator | 2026-02-04 00:49:15.789692 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-04 00:49:15.789699 | orchestrator | Wednesday 04 February 2026 00:47:01 +0000 (0:00:00.882) 0:00:06.237 **** 2026-02-04 00:49:15.789706 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:15.789713 | orchestrator | 2026-02-04 00:49:15.789720 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-04 00:49:15.789727 | orchestrator | Wednesday 04 February 2026 00:47:01 +0000 (0:00:00.334) 0:00:06.571 **** 2026-02-04 00:49:15.789733 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:15.789740 | orchestrator | 2026-02-04 00:49:15.789746 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-04 00:49:15.789753 | 
orchestrator | Wednesday 04 February 2026 00:47:02 +0000 (0:00:00.332) 0:00:06.904 **** 2026-02-04 00:49:15.789759 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:15.789766 | orchestrator | 2026-02-04 00:49:15.789777 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-04 00:49:15.789783 | orchestrator | Wednesday 04 February 2026 00:47:02 +0000 (0:00:00.358) 0:00:07.262 **** 2026-02-04 00:49:15.789790 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:15.789797 | orchestrator | 2026-02-04 00:49:15.789803 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-04 00:49:15.789809 | orchestrator | Wednesday 04 February 2026 00:47:03 +0000 (0:00:00.608) 0:00:07.871 **** 2026-02-04 00:49:15.789814 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:49:15.789818 | orchestrator | 2026-02-04 00:49:15.789822 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-04 00:49:15.789831 | orchestrator | Wednesday 04 February 2026 00:47:03 +0000 (0:00:00.676) 0:00:08.547 **** 2026-02-04 00:49:15.789835 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:49:15.789839 | orchestrator | 2026-02-04 00:49:15.789845 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-04 00:49:15.789851 | orchestrator | Wednesday 04 February 2026 00:47:04 +0000 (0:00:01.079) 0:00:09.626 **** 2026-02-04 00:49:15.789858 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:15.789864 | orchestrator | 2026-02-04 00:49:15.789871 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-04 00:49:15.789877 | orchestrator | Wednesday 04 February 2026 00:47:05 +0000 (0:00:00.431) 0:00:10.057 **** 2026-02-04 00:49:15.789883 | orchestrator | 
skipping: [testbed-node-0] 2026-02-04 00:49:15.789889 | orchestrator | 2026-02-04 00:49:15.789896 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-04 00:49:15.789902 | orchestrator | Wednesday 04 February 2026 00:47:06 +0000 (0:00:01.308) 0:00:11.366 **** 2026-02-04 00:49:15.789912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 00:49:15.789928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 00:49:15.789941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 00:49:15.789949 | orchestrator | 2026-02-04 00:49:15.789957 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-04 00:49:15.789964 | orchestrator | Wednesday 04 February 2026 00:47:08 +0000 (0:00:02.172) 0:00:13.538 **** 2026-02-04 00:49:15.789978 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 00:49:15.789991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 00:49:15.789999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 00:49:15.790007 | orchestrator | 2026-02-04 00:49:15.790078 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-04 00:49:15.790087 | orchestrator | Wednesday 04 February 2026 00:47:11 +0000 (0:00:02.176) 0:00:15.714 **** 2026-02-04 00:49:15.790095 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-04 00:49:15.790102 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-04 00:49:15.790109 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-04 00:49:15.790117 | 
orchestrator | 2026-02-04 00:49:15.790124 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-02-04 00:49:15.790132 | orchestrator | Wednesday 04 February 2026 00:47:13 +0000 (0:00:02.386) 0:00:18.101 **** 2026-02-04 00:49:15.790139 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-04 00:49:15.790146 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-04 00:49:15.790154 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-04 00:49:15.790161 | orchestrator | 2026-02-04 00:49:15.790173 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-04 00:49:15.790267 | orchestrator | Wednesday 04 February 2026 00:47:16 +0000 (0:00:02.594) 0:00:20.696 **** 2026-02-04 00:49:15.790297 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-04 00:49:15.790305 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-04 00:49:15.790312 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-04 00:49:15.790318 | orchestrator | 2026-02-04 00:49:15.790325 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-04 00:49:15.790332 | orchestrator | Wednesday 04 February 2026 00:47:18 +0000 (0:00:02.008) 0:00:22.704 **** 2026-02-04 00:49:15.790340 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-04 00:49:15.790347 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-04 00:49:15.790354 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-04 00:49:15.790361 | orchestrator | 2026-02-04 00:49:15.790368 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-02-04 00:49:15.790376 | orchestrator | Wednesday 04 February 2026 00:47:20 +0000 (0:00:01.957) 0:00:24.661 **** 2026-02-04 00:49:15.790383 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-04 00:49:15.790391 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-04 00:49:15.790437 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-04 00:49:15.790445 | orchestrator | 2026-02-04 00:49:15.790452 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-04 00:49:15.790458 | orchestrator | Wednesday 04 February 2026 00:47:21 +0000 (0:00:01.889) 0:00:26.551 **** 2026-02-04 00:49:15.790464 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-04 00:49:15.790471 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-04 00:49:15.790477 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-04 00:49:15.790485 | orchestrator | 2026-02-04 00:49:15.790503 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-04 00:49:15.790509 | orchestrator | Wednesday 04 February 2026 00:47:23 +0000 (0:00:01.692) 0:00:28.243 **** 2026-02-04 00:49:15.790517 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:15.790524 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:49:15.790531 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:15.790539 | orchestrator | 2026-02-04 
00:49:15.790546 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-02-04 00:49:15.790553 | orchestrator | Wednesday 04 February 2026 00:47:24 +0000 (0:00:00.557) 0:00:28.801 **** 2026-02-04 00:49:15.790562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 00:49:15.790591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 00:49:15.790601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 00:49:15.790609 | orchestrator | 2026-02-04 00:49:15.790616 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-02-04 00:49:15.790623 | orchestrator | Wednesday 04 February 2026 00:47:25 +0000 (0:00:01.465) 0:00:30.266 **** 2026-02-04 00:49:15.790630 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:49:15.790637 | orchestrator | changed: [testbed-node-1] 
2026-02-04 00:49:15.790645 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:49:15.790653 | orchestrator | 2026-02-04 00:49:15.790659 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-02-04 00:49:15.790667 | orchestrator | Wednesday 04 February 2026 00:47:27 +0000 (0:00:01.670) 0:00:31.937 **** 2026-02-04 00:49:15.790674 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:49:15.790682 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:49:15.790689 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:49:15.790696 | orchestrator | 2026-02-04 00:49:15.790703 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-04 00:49:15.790710 | orchestrator | Wednesday 04 February 2026 00:47:34 +0000 (0:00:07.080) 0:00:39.018 **** 2026-02-04 00:49:15.790717 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:49:15.790725 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:49:15.790733 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:49:15.790740 | orchestrator | 2026-02-04 00:49:15.790748 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-04 00:49:15.790755 | orchestrator | 2026-02-04 00:49:15.790762 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-04 00:49:15.790770 | orchestrator | Wednesday 04 February 2026 00:47:34 +0000 (0:00:00.461) 0:00:39.479 **** 2026-02-04 00:49:15.790778 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:49:15.790786 | orchestrator | 2026-02-04 00:49:15.790793 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-04 00:49:15.790806 | orchestrator | Wednesday 04 February 2026 00:47:35 +0000 (0:00:00.621) 0:00:40.101 **** 2026-02-04 00:49:15.790813 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:15.790820 | orchestrator | 2026-02-04 
00:49:15.790828 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-04 00:49:15.790835 | orchestrator | Wednesday 04 February 2026 00:47:35 +0000 (0:00:00.190) 0:00:40.291 **** 2026-02-04 00:49:15.790842 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:49:15.790849 | orchestrator | 2026-02-04 00:49:15.790855 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-04 00:49:15.790861 | orchestrator | Wednesday 04 February 2026 00:47:37 +0000 (0:00:01.770) 0:00:42.062 **** 2026-02-04 00:49:15.790868 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:49:15.790875 | orchestrator | 2026-02-04 00:49:15.790882 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-04 00:49:15.790889 | orchestrator | 2026-02-04 00:49:15.790896 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-04 00:49:15.790903 | orchestrator | Wednesday 04 February 2026 00:48:34 +0000 (0:00:57.530) 0:01:39.592 **** 2026-02-04 00:49:15.790911 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:49:15.790918 | orchestrator | 2026-02-04 00:49:15.790925 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-04 00:49:15.790932 | orchestrator | Wednesday 04 February 2026 00:48:35 +0000 (0:00:00.713) 0:01:40.305 **** 2026-02-04 00:49:15.790940 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:15.790947 | orchestrator | 2026-02-04 00:49:15.790958 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-04 00:49:15.790965 | orchestrator | Wednesday 04 February 2026 00:48:35 +0000 (0:00:00.225) 0:01:40.531 **** 2026-02-04 00:49:15.790972 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:49:15.790980 | orchestrator | 2026-02-04 00:49:15.790988 | orchestrator | TASK [rabbitmq : 
Waiting for rabbitmq to start] ******************************** 2026-02-04 00:49:15.790995 | orchestrator | Wednesday 04 February 2026 00:48:37 +0000 (0:00:01.927) 0:01:42.458 **** 2026-02-04 00:49:15.791003 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:49:15.791010 | orchestrator | 2026-02-04 00:49:15.791017 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-04 00:49:15.791025 | orchestrator | 2026-02-04 00:49:15.791032 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-04 00:49:15.791044 | orchestrator | Wednesday 04 February 2026 00:48:52 +0000 (0:00:15.028) 0:01:57.487 **** 2026-02-04 00:49:15.791052 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:49:15.791060 | orchestrator | 2026-02-04 00:49:15.791067 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-04 00:49:15.791074 | orchestrator | Wednesday 04 February 2026 00:48:53 +0000 (0:00:00.752) 0:01:58.240 **** 2026-02-04 00:49:15.791082 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:49:15.791089 | orchestrator | 2026-02-04 00:49:15.791097 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-04 00:49:15.791104 | orchestrator | Wednesday 04 February 2026 00:48:53 +0000 (0:00:00.387) 0:01:58.627 **** 2026-02-04 00:49:15.791112 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:49:15.791119 | orchestrator | 2026-02-04 00:49:15.791126 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-04 00:49:15.791133 | orchestrator | Wednesday 04 February 2026 00:48:55 +0000 (0:00:01.725) 0:02:00.353 **** 2026-02-04 00:49:15.791141 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:49:15.791148 | orchestrator | 2026-02-04 00:49:15.791154 | orchestrator | PLAY [Apply rabbitmq post-configuration] 
*************************************** 2026-02-04 00:49:15.791161 | orchestrator | 2026-02-04 00:49:15.791167 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-02-04 00:49:15.791174 | orchestrator | Wednesday 04 February 2026 00:49:10 +0000 (0:00:15.001) 0:02:15.354 **** 2026-02-04 00:49:15.791180 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:49:15.791189 | orchestrator | 2026-02-04 00:49:15.791196 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-04 00:49:15.791202 | orchestrator | Wednesday 04 February 2026 00:49:11 +0000 (0:00:00.441) 0:02:15.796 **** 2026-02-04 00:49:15.791208 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-04 00:49:15.791214 | orchestrator | enable_outward_rabbitmq_True 2026-02-04 00:49:15.791221 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-04 00:49:15.791227 | orchestrator | outward_rabbitmq_restart 2026-02-04 00:49:15.791233 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:49:15.791241 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:49:15.791248 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:49:15.791255 | orchestrator | 2026-02-04 00:49:15.791262 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-02-04 00:49:15.791269 | orchestrator | skipping: no hosts matched 2026-02-04 00:49:15.791276 | orchestrator | 2026-02-04 00:49:15.791283 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-02-04 00:49:15.791290 | orchestrator | skipping: no hosts matched 2026-02-04 00:49:15.791297 | orchestrator | 2026-02-04 00:49:15.791304 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-02-04 00:49:15.791311 | orchestrator | skipping: no hosts matched 
2026-02-04 00:49:15.791317 | orchestrator |
2026-02-04 00:49:15.791323 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:49:15.791329 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-02-04 00:49:15.791336 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-04 00:49:15.791342 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 00:49:15.791349 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 00:49:15.791356 | orchestrator |
2026-02-04 00:49:15.791363 | orchestrator |
2026-02-04 00:49:15.791370 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:49:15.791377 | orchestrator | Wednesday 04 February 2026 00:49:13 +0000 (0:00:02.615) 0:02:18.411 ****
2026-02-04 00:49:15.791384 | orchestrator | ===============================================================================
2026-02-04 00:49:15.791391 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 87.56s
2026-02-04 00:49:15.791398 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.08s
2026-02-04 00:49:15.791405 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.42s
2026-02-04 00:49:15.791412 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.30s
2026-02-04 00:49:15.791419 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.62s
2026-02-04 00:49:15.791426 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.59s
2026-02-04 00:49:15.791433 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.39s
2026-02-04 00:49:15.791440 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.18s
2026-02-04 00:49:15.791451 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 2.17s
2026-02-04 00:49:15.791458 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.09s
2026-02-04 00:49:15.791465 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.01s
2026-02-04 00:49:15.791472 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.96s
2026-02-04 00:49:15.791479 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.89s
2026-02-04 00:49:15.791507 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.69s
2026-02-04 00:49:15.791515 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.67s
2026-02-04 00:49:15.791526 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.47s
2026-02-04 00:49:15.791534 | orchestrator | rabbitmq : Remove ha-all policy from RabbitMQ --------------------------- 1.31s
2026-02-04 00:49:15.791541 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.08s
2026-02-04 00:49:15.791548 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.88s
2026-02-04 00:49:15.791555 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.80s
2026-02-04 00:49:15.791572 | orchestrator | 2026-02-04 00:49:15 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED
2026-02-04 00:49:15.792218 | orchestrator | 2026-02-04 00:49:15 | INFO  | Task 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 is in state STARTED
2026-02-04 00:49:15.793297 | orchestrator | 2026-02-04 00:49:15 | INFO  |
Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED
2026-02-04 00:49:15.793441 | orchestrator | 2026-02-04 00:49:15 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:18.824469 | orchestrator | 2026-02-04 00:49:18 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED
2026-02-04 00:49:18.824824 | orchestrator | 2026-02-04 00:49:18 | INFO  | Task 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 is in state STARTED
2026-02-04 00:49:18.825920 | orchestrator | 2026-02-04 00:49:18 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED
2026-02-04 00:49:18.826061 | orchestrator | 2026-02-04 00:49:18 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:21.868346 | orchestrator | 2026-02-04 00:49:21 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED
2026-02-04 00:49:21.869078 | orchestrator | 2026-02-04 00:49:21 | INFO  | Task 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 is in state STARTED
2026-02-04 00:49:21.869942 | orchestrator | 2026-02-04 00:49:21 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED
2026-02-04 00:49:21.870123 | orchestrator | 2026-02-04 00:49:21 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:24.900705 | orchestrator | 2026-02-04 00:49:24 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED
2026-02-04 00:49:24.901135 | orchestrator | 2026-02-04 00:49:24 | INFO  | Task 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 is in state STARTED
2026-02-04 00:49:24.902334 | orchestrator | 2026-02-04 00:49:24 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED
2026-02-04 00:49:24.902427 | orchestrator | 2026-02-04 00:49:24 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:27.930410 | orchestrator | 2026-02-04 00:49:27 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED
2026-02-04 00:49:27.932011 | orchestrator | 2026-02-04 00:49:27 | INFO  | Task 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 is in state STARTED
2026-02-04 00:49:27.935245 | orchestrator | 2026-02-04 00:49:27 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED
2026-02-04 00:49:27.935312 | orchestrator | 2026-02-04 00:49:27 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:30.989756 | orchestrator | 2026-02-04 00:49:30 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED
2026-02-04 00:49:30.991374 | orchestrator | 2026-02-04 00:49:30 | INFO  | Task 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 is in state STARTED
2026-02-04 00:49:30.991419 | orchestrator | 2026-02-04 00:49:30 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED
2026-02-04 00:49:30.991461 | orchestrator | 2026-02-04 00:49:30 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:34.048723 | orchestrator | 2026-02-04 00:49:34 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED
2026-02-04 00:49:34.049860 | orchestrator | 2026-02-04 00:49:34 | INFO  | Task 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 is in state STARTED
2026-02-04 00:49:34.053759 | orchestrator | 2026-02-04 00:49:34 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED
2026-02-04 00:49:34.053803 | orchestrator | 2026-02-04 00:49:34 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:37.083287 | orchestrator | 2026-02-04 00:49:37 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED
2026-02-04 00:49:37.083774 | orchestrator | 2026-02-04 00:49:37 | INFO  | Task 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 is in state STARTED
2026-02-04 00:49:37.084904 | orchestrator | 2026-02-04 00:49:37 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED
2026-02-04 00:49:37.084940 | orchestrator | 2026-02-04 00:49:37 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:40.124897 | orchestrator | 2026-02-04 00:49:40 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED
2026-02-04 00:49:40.125139 | orchestrator | 2026-02-04 00:49:40 | INFO  | Task 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 is in state STARTED
2026-02-04 00:49:40.126678 | orchestrator | 2026-02-04 00:49:40 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED
2026-02-04 00:49:40.126741 | orchestrator | 2026-02-04 00:49:40 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:43.164337 | orchestrator | 2026-02-04 00:49:43 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED
2026-02-04 00:49:43.164423 | orchestrator | 2026-02-04 00:49:43 | INFO  | Task 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 is in state STARTED
2026-02-04 00:49:43.167317 | orchestrator | 2026-02-04 00:49:43 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED
2026-02-04 00:49:43.167416 | orchestrator | 2026-02-04 00:49:43 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:46.192591 | orchestrator | 2026-02-04 00:49:46 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED
2026-02-04 00:49:46.193644 | orchestrator | 2026-02-04 00:49:46 | INFO  | Task 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 is in state STARTED
2026-02-04 00:49:46.194891 | orchestrator | 2026-02-04 00:49:46 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED
2026-02-04 00:49:46.194951 | orchestrator | 2026-02-04 00:49:46 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:49.255411 | orchestrator | 2026-02-04 00:49:49 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED
2026-02-04 00:49:49.255603 | orchestrator | 2026-02-04 00:49:49 | INFO  | Task 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 is in state STARTED
2026-02-04 00:49:49.255695 | orchestrator | 2026-02-04 00:49:49 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED
2026-02-04 00:49:49.255706 | orchestrator | 2026-02-04 00:49:49 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:52.286671 | orchestrator | 2026-02-04 00:49:52 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED
2026-02-04 00:49:52.286874 | orchestrator | 2026-02-04 00:49:52 | INFO  | Task 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 is in state STARTED
2026-02-04 00:49:52.287442 | orchestrator | 2026-02-04 00:49:52 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED
2026-02-04 00:49:52.287495 | orchestrator | 2026-02-04 00:49:52 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:49:55.319210 | orchestrator | 2026-02-04 00:49:55 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED
2026-02-04 00:49:55.319862 | orchestrator | 2026-02-04 00:49:55 | INFO  | Task 6a5ac950-7b5d-4122-99cf-8a51071c9bf9 is in state SUCCESS
2026-02-04 00:49:55.321524 | orchestrator |
2026-02-04 00:49:55.321566 | orchestrator |
2026-02-04 00:49:55.321573 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 00:49:55.321578 | orchestrator |
2026-02-04 00:49:55.321583 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 00:49:55.321588 | orchestrator | Wednesday 04 February 2026 00:47:38 +0000 (0:00:00.250) 0:00:00.250 ****
2026-02-04 00:49:55.321592 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:49:55.321597 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:49:55.321601 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:49:55.321605 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:49:55.321609 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:49:55.321613 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:49:55.321617 | orchestrator |
2026-02-04 00:49:55.321621 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 00:49:55.321625 | orchestrator | Wednesday 04 February 2026 00:47:38 +0000 (0:00:00.872) 0:00:01.123 ****
2026-02-04 00:49:55.321629 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-02-04
00:49:55.321634 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-02-04 00:49:55.321638 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-02-04 00:49:55.321641 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-02-04 00:49:55.321645 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-02-04 00:49:55.321649 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-02-04 00:49:55.321653 | orchestrator |
2026-02-04 00:49:55.321657 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-02-04 00:49:55.321661 | orchestrator |
2026-02-04 00:49:55.321674 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-02-04 00:49:55.321695 | orchestrator | Wednesday 04 February 2026 00:47:40 +0000 (0:00:01.263) 0:00:02.386 ****
2026-02-04 00:49:55.321701 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:49:55.321706 | orchestrator |
2026-02-04 00:49:55.321709 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-02-04 00:49:55.321713 | orchestrator | Wednesday 04 February 2026 00:47:41 +0000 (0:00:00.865) 0:00:03.252 ****
2026-02-04 00:49:55.321719 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321764 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321769 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321819 | orchestrator |
2026-02-04 00:49:55.321829 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-02-04 00:49:55.321834 | orchestrator | Wednesday 04 February 2026 00:47:42 +0000 (0:00:01.329) 0:00:04.582 ****
2026-02-04 00:49:55.321837 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321845 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321849 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321869 | orchestrator |
2026-02-04 00:49:55.321873 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-02-04 00:49:55.321877 | orchestrator | Wednesday 04 February 2026 00:47:44 +0000 (0:00:02.268) 0:00:06.851 ****
2026-02-04 00:49:55.321881 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes':
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321885 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321893 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321912 | orchestrator |
2026-02-04 00:49:55.321934 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-02-04 00:49:55.321938 | orchestrator | Wednesday 04 February 2026 00:47:46 +0000 (0:00:01.400) 0:00:08.683 ****
2026-02-04 00:49:55.321942 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321950 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321954 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321974 | orchestrator |
2026-02-04 00:49:55.321978 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2026-02-04 00:49:55.321982 | orchestrator | Wednesday 04 February 2026 00:47:47 +0000 (0:00:01.428) 0:00:10.083 ****
2026-02-04 00:49:55.321985 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321992 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.321996 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.322007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 00:49:55.322039 | orchestrator |
changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.322043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.322047 | orchestrator | 2026-02-04 00:49:55.322051 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-04 00:49:55.322055 | orchestrator | Wednesday 04 February 2026 00:47:49 +0000 (0:00:01.428) 0:00:11.512 **** 2026-02-04 00:49:55.322059 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:49:55.322064 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:49:55.322068 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:49:55.322072 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:49:55.322076 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:49:55.322080 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:49:55.322085 | orchestrator | 2026-02-04 00:49:55.322089 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-04 00:49:55.322094 | orchestrator | Wednesday 04 February 2026 00:47:51 +0000 (0:00:02.556) 0:00:14.068 **** 2026-02-04 00:49:55.322098 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-02-04 00:49:55.322103 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-04 00:49:55.322107 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-04 00:49:55.322114 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-04 00:49:55.322119 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-04 00:49:55.322123 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-04 00:49:55.322128 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-04 00:49:55.322133 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-04 00:49:55.322137 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-04 00:49:55.322141 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-04 00:49:55.322145 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-04 00:49:55.322150 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-04 00:49:55.322154 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-04 00:49:55.322165 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-04 00:49:55.322173 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-04 00:49:55.322178 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-04 00:49:55.322184 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-04 00:49:55.322190 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-04 00:49:55.322196 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-04 00:49:55.322203 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-04 00:49:55.322211 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-04 00:49:55.322220 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-04 00:49:55.322226 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-04 00:49:55.322232 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-04 00:49:55.322238 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-04 00:49:55.322244 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-04 00:49:55.322250 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-04 00:49:55.322256 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-04 
00:49:55.322262 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-04 00:49:55.322269 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-04 00:49:55.322275 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-04 00:49:55.322281 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-04 00:49:55.322287 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-04 00:49:55.322293 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-04 00:49:55.322300 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-04 00:49:55.322306 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-04 00:49:55.322312 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-04 00:49:55.322318 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-04 00:49:55.322322 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-04 00:49:55.322326 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-04 00:49:55.322334 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-04 00:49:55.322338 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-04 00:49:55.322346 | orchestrator | ok: 
[testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-04 00:49:55.322351 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-04 00:49:55.322355 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-04 00:49:55.322359 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-04 00:49:55.322362 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-04 00:49:55.322366 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-04 00:49:55.322373 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-04 00:49:55.322377 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-04 00:49:55.322380 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-04 00:49:55.322384 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-04 00:49:55.322388 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-04 00:49:55.322392 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-04 00:49:55.322396 | orchestrator | 
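The `ovn-remote` value applied to every node above is simply the comma-joined list of the three OVN southbound DB endpoints (the node IPs and port 6642 are taken from the log). A minimal sketch of how such a string is assembled; the helper name is illustrative, not the role's actual code:

```python
# Build the ovn-remote external-id written by the ovn-controller role above.
# Node IPs and the OVN SB port (6642) come from the log output; the helper
# itself is a hypothetical illustration, not the role's implementation.
def build_ovn_remote(db_hosts, port=6642):
    """Return the comma-joined list of tcp endpoints for ovn-remote."""
    return ",".join(f"tcp:{host}:{port}" for host in db_hosts)

db_hosts = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]
print(build_ovn_remote(db_hosts))
# tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642
```

Note that only the three control nodes (testbed-node-0/1/2) also receive `ovn-bridge-mappings` and `ovn-cms-options` with `state: present`, marking them as gateway chassis, while the compute nodes get those keys removed (`state: absent`) and `ovn-chassis-mac-mappings` instead.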
2026-02-04 00:49:55.322400 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-04 00:49:55.322403 | orchestrator | Wednesday 04 February 2026 00:48:12 +0000 (0:00:20.552) 0:00:34.621 **** 2026-02-04 00:49:55.322407 | orchestrator | 2026-02-04 00:49:55.322411 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-04 00:49:55.322415 | orchestrator | Wednesday 04 February 2026 00:48:12 +0000 (0:00:00.072) 0:00:34.694 **** 2026-02-04 00:49:55.322419 | orchestrator | 2026-02-04 00:49:55.322423 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-04 00:49:55.322429 | orchestrator | Wednesday 04 February 2026 00:48:12 +0000 (0:00:00.068) 0:00:34.763 **** 2026-02-04 00:49:55.322435 | orchestrator | 2026-02-04 00:49:55.322441 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-04 00:49:55.322446 | orchestrator | Wednesday 04 February 2026 00:48:12 +0000 (0:00:00.071) 0:00:34.834 **** 2026-02-04 00:49:55.322451 | orchestrator | 2026-02-04 00:49:55.322456 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-04 00:49:55.322461 | orchestrator | Wednesday 04 February 2026 00:48:12 +0000 (0:00:00.189) 0:00:35.023 **** 2026-02-04 00:49:55.322466 | orchestrator | 2026-02-04 00:49:55.322530 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-04 00:49:55.322538 | orchestrator | Wednesday 04 February 2026 00:48:12 +0000 (0:00:00.098) 0:00:35.121 **** 2026-02-04 00:49:55.322544 | orchestrator | 2026-02-04 00:49:55.322550 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-02-04 00:49:55.322556 | orchestrator | Wednesday 04 February 2026 00:48:13 +0000 (0:00:00.071) 0:00:35.193 **** 2026-02-04 00:49:55.322562 | orchestrator 
| ok: [testbed-node-0] 2026-02-04 00:49:55.322567 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:49:55.322573 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:49:55.322579 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:49:55.322585 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:49:55.322591 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:49:55.322602 | orchestrator | 2026-02-04 00:49:55.322607 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-04 00:49:55.322613 | orchestrator | Wednesday 04 February 2026 00:48:14 +0000 (0:00:01.751) 0:00:36.945 **** 2026-02-04 00:49:55.322619 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:49:55.322625 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:49:55.322631 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:49:55.322637 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:49:55.322643 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:49:55.322648 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:49:55.322654 | orchestrator | 2026-02-04 00:49:55.322659 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-04 00:49:55.322665 | orchestrator | 2026-02-04 00:49:55.322671 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-04 00:49:55.322676 | orchestrator | Wednesday 04 February 2026 00:48:41 +0000 (0:00:26.243) 0:01:03.189 **** 2026-02-04 00:49:55.322683 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:49:55.322689 | orchestrator | 2026-02-04 00:49:55.322695 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-04 00:49:55.322702 | orchestrator | Wednesday 04 February 2026 00:48:42 +0000 (0:00:01.518) 0:01:04.707 **** 2026-02-04 00:49:55.322708 | orchestrator | included: 
/ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:49:55.322714 | orchestrator | 2026-02-04 00:49:55.322724 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-04 00:49:55.322731 | orchestrator | Wednesday 04 February 2026 00:48:43 +0000 (0:00:00.894) 0:01:05.602 **** 2026-02-04 00:49:55.322737 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:49:55.322742 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:49:55.322748 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:49:55.322754 | orchestrator | 2026-02-04 00:49:55.322760 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-04 00:49:55.322766 | orchestrator | Wednesday 04 February 2026 00:48:44 +0000 (0:00:01.264) 0:01:06.866 **** 2026-02-04 00:49:55.322772 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:49:55.322778 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:49:55.322784 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:49:55.322790 | orchestrator | 2026-02-04 00:49:55.322797 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-04 00:49:55.322803 | orchestrator | Wednesday 04 February 2026 00:48:44 +0000 (0:00:00.240) 0:01:07.107 **** 2026-02-04 00:49:55.322809 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:49:55.322816 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:49:55.322822 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:49:55.322828 | orchestrator | 2026-02-04 00:49:55.322834 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-04 00:49:55.322840 | orchestrator | Wednesday 04 February 2026 00:48:45 +0000 (0:00:00.316) 0:01:07.423 **** 2026-02-04 00:49:55.322846 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:49:55.322852 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:49:55.322858 | orchestrator 
| ok: [testbed-node-2] 2026-02-04 00:49:55.322865 | orchestrator | 2026-02-04 00:49:55.322871 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-02-04 00:49:55.322882 | orchestrator | Wednesday 04 February 2026 00:48:45 +0000 (0:00:00.364) 0:01:07.788 **** 2026-02-04 00:49:55.322890 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:49:55.322895 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:49:55.322902 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:49:55.322908 | orchestrator | 2026-02-04 00:49:55.322914 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-02-04 00:49:55.322921 | orchestrator | Wednesday 04 February 2026 00:48:46 +0000 (0:00:00.503) 0:01:08.291 **** 2026-02-04 00:49:55.322927 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:55.322933 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:55.322946 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:49:55.322949 | orchestrator | 2026-02-04 00:49:55.322953 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-02-04 00:49:55.322957 | orchestrator | Wednesday 04 February 2026 00:48:46 +0000 (0:00:00.257) 0:01:08.548 **** 2026-02-04 00:49:55.322961 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:55.322965 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:55.322969 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:49:55.322973 | orchestrator | 2026-02-04 00:49:55.322977 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-02-04 00:49:55.322980 | orchestrator | Wednesday 04 February 2026 00:48:46 +0000 (0:00:00.267) 0:01:08.816 **** 2026-02-04 00:49:55.322984 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:55.322988 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:55.322992 | orchestrator | skipping: 
[testbed-node-2] 2026-02-04 00:49:55.322996 | orchestrator | 2026-02-04 00:49:55.323000 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-02-04 00:49:55.323004 | orchestrator | Wednesday 04 February 2026 00:48:46 +0000 (0:00:00.266) 0:01:09.082 **** 2026-02-04 00:49:55.323008 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:55.323011 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:55.323015 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:49:55.323019 | orchestrator | 2026-02-04 00:49:55.323023 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-02-04 00:49:55.323027 | orchestrator | Wednesday 04 February 2026 00:48:47 +0000 (0:00:00.419) 0:01:09.502 **** 2026-02-04 00:49:55.323030 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:55.323034 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:55.323038 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:49:55.323042 | orchestrator | 2026-02-04 00:49:55.323046 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-02-04 00:49:55.323049 | orchestrator | Wednesday 04 February 2026 00:48:47 +0000 (0:00:00.390) 0:01:09.893 **** 2026-02-04 00:49:55.323053 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:55.323057 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:55.323061 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:49:55.323065 | orchestrator | 2026-02-04 00:49:55.323069 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-02-04 00:49:55.323072 | orchestrator | Wednesday 04 February 2026 00:48:47 +0000 (0:00:00.265) 0:01:10.158 **** 2026-02-04 00:49:55.323076 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:55.323080 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:55.323084 | orchestrator | skipping: 
[testbed-node-2] 2026-02-04 00:49:55.323088 | orchestrator | 2026-02-04 00:49:55.323092 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-02-04 00:49:55.323096 | orchestrator | Wednesday 04 February 2026 00:48:48 +0000 (0:00:00.235) 0:01:10.393 **** 2026-02-04 00:49:55.323100 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:55.323104 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:55.323108 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:49:55.323111 | orchestrator | 2026-02-04 00:49:55.323115 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-02-04 00:49:55.323119 | orchestrator | Wednesday 04 February 2026 00:48:48 +0000 (0:00:00.343) 0:01:10.737 **** 2026-02-04 00:49:55.323124 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:55.323131 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:55.323138 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:49:55.323143 | orchestrator | 2026-02-04 00:49:55.323149 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-02-04 00:49:55.323156 | orchestrator | Wednesday 04 February 2026 00:48:48 +0000 (0:00:00.308) 0:01:11.045 **** 2026-02-04 00:49:55.323162 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:55.323168 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:55.323179 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:49:55.323185 | orchestrator | 2026-02-04 00:49:55.323196 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-02-04 00:49:55.323203 | orchestrator | Wednesday 04 February 2026 00:48:49 +0000 (0:00:00.520) 0:01:11.566 **** 2026-02-04 00:49:55.323209 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:55.323215 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:55.323222 | orchestrator | skipping: 
[testbed-node-2] 2026-02-04 00:49:55.323228 | orchestrator | 2026-02-04 00:49:55.323234 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-02-04 00:49:55.323240 | orchestrator | Wednesday 04 February 2026 00:48:49 +0000 (0:00:00.545) 0:01:12.111 **** 2026-02-04 00:49:55.323246 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:55.323251 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:55.323257 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:49:55.323263 | orchestrator | 2026-02-04 00:49:55.323268 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-04 00:49:55.323274 | orchestrator | Wednesday 04 February 2026 00:48:50 +0000 (0:00:00.520) 0:01:12.632 **** 2026-02-04 00:49:55.323279 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:49:55.323286 | orchestrator | 2026-02-04 00:49:55.323291 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-02-04 00:49:55.323297 | orchestrator | Wednesday 04 February 2026 00:48:51 +0000 (0:00:00.909) 0:01:13.542 **** 2026-02-04 00:49:55.323303 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:49:55.323309 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:49:55.323316 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:49:55.323322 | orchestrator | 2026-02-04 00:49:55.323333 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-02-04 00:49:55.323340 | orchestrator | Wednesday 04 February 2026 00:48:51 +0000 (0:00:00.430) 0:01:13.972 **** 2026-02-04 00:49:55.323346 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:49:55.323352 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:49:55.323358 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:49:55.323364 | orchestrator | 2026-02-04 00:49:55.323369 | 
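The task sequence above (check for existing DB volumes, divide hosts by volume availability, then include `bootstrap-initial.yml` with "new cluster" bootstrap args) reflects a simple decision: since no node has a pre-existing OVN NB/SB volume, the whole cluster is bootstrapped fresh, and all the "new member" and leader-lookup tasks are skipped. A sketch of that decision under those assumptions; function and variable names are hypothetical, not the role's code:

```python
# Illustrative sketch of the bootstrap decision visible in the tasks above:
# hosts are divided by whether an OVN DB container volume already exists.
# If none do, the role treats this as a brand-new cluster; otherwise hosts
# lacking a volume would join the existing cluster as new members.
def divide_by_volume(volume_present):
    """Split hosts into (have_volume, missing_volume) lists."""
    have = [h for h, present in volume_present.items() if present]
    missing = [h for h, present in volume_present.items() if not present]
    return have, missing

# Matches this run: first deployment, so no volumes exist yet.
volumes = {"testbed-node-0": False, "testbed-node-1": False, "testbed-node-2": False}
have, missing = divide_by_volume(volumes)
mode = "new member" if have else "new cluster"
print(mode)
# new cluster
```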
orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-02-04 00:49:55.323372 | orchestrator | Wednesday 04 February 2026 00:48:52 +0000 (0:00:00.341) 0:01:14.313 **** 2026-02-04 00:49:55.323376 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:55.323380 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:55.323384 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:49:55.323388 | orchestrator | 2026-02-04 00:49:55.323392 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-02-04 00:49:55.323396 | orchestrator | Wednesday 04 February 2026 00:48:52 +0000 (0:00:00.437) 0:01:14.750 **** 2026-02-04 00:49:55.323399 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:55.323403 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:55.323407 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:49:55.323411 | orchestrator | 2026-02-04 00:49:55.323415 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-02-04 00:49:55.323419 | orchestrator | Wednesday 04 February 2026 00:48:52 +0000 (0:00:00.344) 0:01:15.095 **** 2026-02-04 00:49:55.323423 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:55.323427 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:55.323431 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:49:55.323435 | orchestrator | 2026-02-04 00:49:55.323439 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-02-04 00:49:55.323443 | orchestrator | Wednesday 04 February 2026 00:48:53 +0000 (0:00:00.415) 0:01:15.510 **** 2026-02-04 00:49:55.323447 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:55.323451 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:55.323454 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:49:55.323458 | orchestrator | 2026-02-04 
00:49:55.323468 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-02-04 00:49:55.323496 | orchestrator | Wednesday 04 February 2026 00:48:53 +0000 (0:00:00.513) 0:01:16.024 **** 2026-02-04 00:49:55.323502 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:55.323505 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:55.323509 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:49:55.323513 | orchestrator | 2026-02-04 00:49:55.323517 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-02-04 00:49:55.323521 | orchestrator | Wednesday 04 February 2026 00:48:54 +0000 (0:00:00.715) 0:01:16.739 **** 2026-02-04 00:49:55.323525 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:55.323529 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:55.323532 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:49:55.323536 | orchestrator | 2026-02-04 00:49:55.323540 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-04 00:49:55.323544 | orchestrator | Wednesday 04 February 2026 00:48:54 +0000 (0:00:00.399) 0:01:17.139 **** 2026-02-04 00:49:55.323549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 
'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323606 | orchestrator | 2026-02-04 00:49:55.323610 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-04 00:49:55.323614 | orchestrator | Wednesday 04 February 2026 00:48:56 +0000 (0:00:01.484) 0:01:18.623 **** 2026-02-04 00:49:55.323619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323623 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323663 | orchestrator | 2026-02-04 00:49:55.323667 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-02-04 00:49:55.323671 | orchestrator | Wednesday 04 February 2026 00:49:00 +0000 (0:00:04.071) 0:01:22.695 **** 2026-02-04 00:49:55.323675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-02-04 00:49:55.323764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.323787 | orchestrator | 2026-02-04 00:49:55.323791 | orchestrator | TASK [ovn-db : Flush handlers] 
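Each loop item in the tasks above is one kolla-style container definition, and the three OVN DB services differ only in name, image, and the extra data volume. A sketch that reproduces the shape of those items (values copied from the log; building them programmatically like this is only an illustration, not how the role defines them):

```python
# Reconstruct the per-service container definition repeated in the loop
# items above. The registry, tag, and volume paths are taken from the log;
# the generator function itself is a hypothetical convenience.
def ovn_service(name, image_name, tag="24.9.3.20251130", extra_volumes=()):
    return {
        "container_name": name.replace("-", "_"),
        "group": name,
        "enabled": True,
        "image": f"registry.osism.tech/kolla/release/{image_name}:{tag}",
        "volumes": [
            f"/etc/kolla/{name}/:/var/lib/kolla/config_files/:ro",
            "/etc/localtime:/etc/localtime:ro",
            *extra_volumes,
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    }

# The NB DB service from the log; ovn-northd omits the data volume and
# ovn-sb-db mounts ovn_sb_db:/var/lib/openvswitch/ovn-sb/ instead.
nb = ovn_service("ovn-nb-db", "ovn-nb-db-server",
                 extra_volumes=["ovn_nb_db:/var/lib/openvswitch/ovn-nb/"])
```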
************************************************* 2026-02-04 00:49:55.323795 | orchestrator | Wednesday 04 February 2026 00:49:03 +0000 (0:00:02.518) 0:01:25.213 **** 2026-02-04 00:49:55.323799 | orchestrator | 2026-02-04 00:49:55.323803 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-04 00:49:55.323807 | orchestrator | Wednesday 04 February 2026 00:49:03 +0000 (0:00:00.060) 0:01:25.273 **** 2026-02-04 00:49:55.323811 | orchestrator | 2026-02-04 00:49:55.323815 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-04 00:49:55.323819 | orchestrator | Wednesday 04 February 2026 00:49:03 +0000 (0:00:00.057) 0:01:25.331 **** 2026-02-04 00:49:55.323823 | orchestrator | 2026-02-04 00:49:55.323826 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-04 00:49:55.323830 | orchestrator | Wednesday 04 February 2026 00:49:03 +0000 (0:00:00.057) 0:01:25.389 **** 2026-02-04 00:49:55.323834 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:49:55.323838 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:49:55.323842 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:49:55.323846 | orchestrator | 2026-02-04 00:49:55.323850 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-04 00:49:55.323854 | orchestrator | Wednesday 04 February 2026 00:49:05 +0000 (0:00:02.266) 0:01:27.655 **** 2026-02-04 00:49:55.323858 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:49:55.323862 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:49:55.323865 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:49:55.323869 | orchestrator | 2026-02-04 00:49:55.323873 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-04 00:49:55.323877 | orchestrator | Wednesday 04 February 2026 00:49:12 +0000 (0:00:07.454) 
0:01:35.110 **** 2026-02-04 00:49:55.323881 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:49:55.323885 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:49:55.323889 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:49:55.323893 | orchestrator | 2026-02-04 00:49:55.323897 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-04 00:49:55.323900 | orchestrator | Wednesday 04 February 2026 00:49:15 +0000 (0:00:02.812) 0:01:37.922 **** 2026-02-04 00:49:55.323904 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:55.323908 | orchestrator | 2026-02-04 00:49:55.323912 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-04 00:49:55.323916 | orchestrator | Wednesday 04 February 2026 00:49:16 +0000 (0:00:00.317) 0:01:38.240 **** 2026-02-04 00:49:55.323920 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:49:55.323925 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:49:55.323931 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:49:55.323938 | orchestrator | 2026-02-04 00:49:55.323948 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-04 00:49:55.323959 | orchestrator | Wednesday 04 February 2026 00:49:16 +0000 (0:00:00.804) 0:01:39.044 **** 2026-02-04 00:49:55.323966 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:55.323972 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:49:55.323978 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:49:55.323984 | orchestrator | 2026-02-04 00:49:55.323991 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-04 00:49:55.323997 | orchestrator | Wednesday 04 February 2026 00:49:17 +0000 (0:00:00.641) 0:01:39.686 **** 2026-02-04 00:49:55.324003 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:49:55.324009 | orchestrator | ok: [testbed-node-1] 2026-02-04 
00:49:55.324016 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:49:55.324022 | orchestrator | 2026-02-04 00:49:55.324028 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-04 00:49:55.324035 | orchestrator | Wednesday 04 February 2026 00:49:18 +0000 (0:00:00.764) 0:01:40.451 **** 2026-02-04 00:49:55.324042 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:55.324049 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:49:55.324055 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:49:55.324062 | orchestrator | 2026-02-04 00:49:55.324068 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-04 00:49:55.324074 | orchestrator | Wednesday 04 February 2026 00:49:18 +0000 (0:00:00.672) 0:01:41.123 **** 2026-02-04 00:49:55.324081 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:49:55.324087 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:49:55.324094 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:49:55.324101 | orchestrator | 2026-02-04 00:49:55.324107 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-04 00:49:55.324119 | orchestrator | Wednesday 04 February 2026 00:49:20 +0000 (0:00:01.616) 0:01:42.739 **** 2026-02-04 00:49:55.324127 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:49:55.324133 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:49:55.324139 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:49:55.324145 | orchestrator | 2026-02-04 00:49:55.324152 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-02-04 00:49:55.324157 | orchestrator | Wednesday 04 February 2026 00:49:21 +0000 (0:00:01.065) 0:01:43.805 **** 2026-02-04 00:49:55.324161 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:49:55.324165 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:49:55.324169 | orchestrator | ok: [testbed-node-2] 
2026-02-04 00:49:55.324173 | orchestrator | 2026-02-04 00:49:55.324177 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-04 00:49:55.324181 | orchestrator | Wednesday 04 February 2026 00:49:21 +0000 (0:00:00.303) 0:01:44.109 **** 2026-02-04 00:49:55.324185 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324189 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324194 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324198 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324206 | 
orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324266 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324280 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324284 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324292 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324296 | orchestrator | 2026-02-04 00:49:55.324300 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-04 00:49:55.324304 | orchestrator | Wednesday 04 February 2026 00:49:23 +0000 (0:00:01.482) 0:01:45.591 **** 2026-02-04 00:49:55.324308 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324313 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324317 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324321 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324340 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324353 | orchestrator | 2026-02-04 00:49:55.324357 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-02-04 00:49:55.324361 | orchestrator | Wednesday 04 February 2026 00:49:27 +0000 (0:00:03.878) 0:01:49.470 **** 2026-02-04 00:49:55.324369 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324373 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324377 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324393 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324406 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324411 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 00:49:55.324415 | orchestrator | 2026-02-04 00:49:55.324418 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-04 00:49:55.324423 | orchestrator | Wednesday 04 February 2026 00:49:30 +0000 (0:00:03.110) 0:01:52.580 **** 2026-02-04 00:49:55.324426 | orchestrator | 2026-02-04 00:49:55.324430 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-04 00:49:55.324434 | orchestrator | Wednesday 04 February 2026 00:49:30 +0000 (0:00:00.072) 0:01:52.653 **** 2026-02-04 00:49:55.324438 | orchestrator | 2026-02-04 00:49:55.324442 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-04 00:49:55.324449 | orchestrator | Wednesday 04 February 2026 00:49:30 +0000 (0:00:00.088) 0:01:52.742 **** 2026-02-04 00:49:55.324453 | orchestrator | 2026-02-04 00:49:55.324457 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-04 00:49:55.324461 | orchestrator | Wednesday 04 February 2026 00:49:30 +0000 (0:00:00.099) 0:01:52.841 **** 2026-02-04 00:49:55.324465 | 
orchestrator | changed: [testbed-node-1] 2026-02-04 00:49:55.324469 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:49:55.324592 | orchestrator | 2026-02-04 00:49:55.324601 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-04 00:49:55.324606 | orchestrator | Wednesday 04 February 2026 00:49:37 +0000 (0:00:06.530) 0:01:59.372 **** 2026-02-04 00:49:55.324615 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:49:55.324619 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:49:55.324624 | orchestrator | 2026-02-04 00:49:55.324627 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-04 00:49:55.324632 | orchestrator | Wednesday 04 February 2026 00:49:43 +0000 (0:00:06.212) 0:02:05.584 **** 2026-02-04 00:49:55.324636 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:49:55.324640 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:49:55.324644 | orchestrator | 2026-02-04 00:49:55.324647 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-04 00:49:55.324651 | orchestrator | Wednesday 04 February 2026 00:49:49 +0000 (0:00:06.505) 0:02:12.090 **** 2026-02-04 00:49:55.324655 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:49:55.324659 | orchestrator | 2026-02-04 00:49:55.324663 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-04 00:49:55.324667 | orchestrator | Wednesday 04 February 2026 00:49:50 +0000 (0:00:00.133) 0:02:12.223 **** 2026-02-04 00:49:55.324671 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:49:55.324675 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:49:55.324680 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:49:55.324683 | orchestrator | 2026-02-04 00:49:55.324687 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-04 00:49:55.324691 | 
orchestrator | Wednesday 04 February 2026 00:49:50 +0000 (0:00:00.865) 0:02:13.089 **** 2026-02-04 00:49:55.324695 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:55.324699 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:49:55.324703 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:49:55.324707 | orchestrator | 2026-02-04 00:49:55.324711 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-04 00:49:55.324716 | orchestrator | Wednesday 04 February 2026 00:49:51 +0000 (0:00:00.695) 0:02:13.785 **** 2026-02-04 00:49:55.324722 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:49:55.324728 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:49:55.324734 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:49:55.324740 | orchestrator | 2026-02-04 00:49:55.324746 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-04 00:49:55.324751 | orchestrator | Wednesday 04 February 2026 00:49:52 +0000 (0:00:00.711) 0:02:14.497 **** 2026-02-04 00:49:55.324758 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:49:55.324764 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:49:55.324770 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:49:55.324776 | orchestrator | 2026-02-04 00:49:55.324782 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-04 00:49:55.324788 | orchestrator | Wednesday 04 February 2026 00:49:52 +0000 (0:00:00.650) 0:02:15.148 **** 2026-02-04 00:49:55.324795 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:49:55.324801 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:49:55.324808 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:49:55.324814 | orchestrator | 2026-02-04 00:49:55.324820 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-04 00:49:55.324826 | orchestrator | Wednesday 04 February 2026 
00:49:53 +0000 (0:00:00.727) 0:02:15.875 **** 2026-02-04 00:49:55.324833 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:49:55.324837 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:49:55.324841 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:49:55.324845 | orchestrator | 2026-02-04 00:49:55.324849 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:49:55.324853 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-04 00:49:55.324859 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-04 00:49:55.324871 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-04 00:49:55.324881 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:49:55.324886 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:49:55.324890 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:49:55.324894 | orchestrator | 2026-02-04 00:49:55.324898 | orchestrator | 2026-02-04 00:49:55.324902 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:49:55.324906 | orchestrator | Wednesday 04 February 2026 00:49:54 +0000 (0:00:00.985) 0:02:16.861 **** 2026-02-04 00:49:55.324910 | orchestrator | =============================================================================== 2026-02-04 00:49:55.324914 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 26.24s 2026-02-04 00:49:55.324918 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.55s 2026-02-04 00:49:55.324922 | orchestrator | ovn-db : Restart ovn-sb-db container 
----------------------------------- 13.67s 2026-02-04 00:49:55.324930 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.32s 2026-02-04 00:49:55.324935 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.80s 2026-02-04 00:49:55.324939 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.07s 2026-02-04 00:49:55.324943 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.88s 2026-02-04 00:49:55.324947 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.11s 2026-02-04 00:49:55.324951 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.56s 2026-02-04 00:49:55.324955 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.52s 2026-02-04 00:49:55.324959 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.27s 2026-02-04 00:49:55.324963 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.83s 2026-02-04 00:49:55.324968 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.75s 2026-02-04 00:49:55.324972 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.62s 2026-02-04 00:49:55.324976 | orchestrator | ovn-db : include_tasks -------------------------------------------------- 1.52s 2026-02-04 00:49:55.324980 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.48s 2026-02-04 00:49:55.324984 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.48s 2026-02-04 00:49:55.324988 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.43s 2026-02-04 00:49:55.324992 | orchestrator | ovn-controller : Copying over systemd override 
-------------------------- 1.40s 2026-02-04 00:49:55.324996 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.33s 2026-02-04 00:49:55.325000 | orchestrator | 2026-02-04 00:49:55 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:49:55.325005 | orchestrator | 2026-02-04 00:49:55 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:49:58.355988 | orchestrator | 2026-02-04 00:49:58 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:49:58.357550 | orchestrator | 2026-02-04 00:49:58 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:49:58.357739 | orchestrator | 2026-02-04 00:49:58 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:01.397608 | orchestrator | 2026-02-04 00:50:01 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:50:01.398060 | orchestrator | 2026-02-04 00:50:01 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:50:01.398716 | orchestrator | 2026-02-04 00:50:01 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:04.425905 | orchestrator | 2026-02-04 00:50:04 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:50:04.427039 | orchestrator | 2026-02-04 00:50:04 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:50:04.427205 | orchestrator | 2026-02-04 00:50:04 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:07.470254 | orchestrator | 2026-02-04 00:50:07 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:50:07.470610 | orchestrator | 2026-02-04 00:50:07 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:50:07.472013 | orchestrator | 2026-02-04 00:50:07 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:10.504414 | orchestrator | 2026-02-04 
00:50:10 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:50:10.505118 | orchestrator | 2026-02-04 00:50:10 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:50:10.505162 | orchestrator | 2026-02-04 00:50:10 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:13.535383 | orchestrator | 2026-02-04 00:50:13 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:50:13.535893 | orchestrator | 2026-02-04 00:50:13 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:50:13.535982 | orchestrator | 2026-02-04 00:50:13 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:16.564340 | orchestrator | 2026-02-04 00:50:16 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:50:16.566210 | orchestrator | 2026-02-04 00:50:16 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:50:16.566281 | orchestrator | 2026-02-04 00:50:16 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:19.619325 | orchestrator | 2026-02-04 00:50:19 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:50:19.620212 | orchestrator | 2026-02-04 00:50:19 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:50:19.620316 | orchestrator | 2026-02-04 00:50:19 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:22.663757 | orchestrator | 2026-02-04 00:50:22 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED 2026-02-04 00:50:22.666550 | orchestrator | 2026-02-04 00:50:22 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:50:22.666636 | orchestrator | 2026-02-04 00:50:22 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:50:25.714093 | orchestrator | 2026-02-04 00:50:25 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state 
STARTED
2026-02-04 00:50:25.716172 | orchestrator | 2026-02-04 00:50:25 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED
2026-02-04 00:50:25.716251 | orchestrator | 2026-02-04 00:50:25 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:50:28.749276 | orchestrator | 2026-02-04 00:50:28 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state STARTED
2026-02-04 00:50:28.750162 | orchestrator | 2026-02-04 00:50:28 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED
2026-02-04 00:50:28.750286 | orchestrator | 2026-02-04 00:50:28 | INFO  | Wait 1 second(s) until the next check
[... identical checks repeat every ~3 seconds; tasks 836470c2-5f07-49fb-ad60-5e325a088f5a and 227495c6-aec9-44a3-8e31-96a65f9ed65b remain in state STARTED until 00:52:33 ...]
2026-02-04 00:52:33.405224 | orchestrator | 2026-02-04 00:52:33 | INFO  | Task 836470c2-5f07-49fb-ad60-5e325a088f5a is in state SUCCESS
2026-02-04 00:52:33.406211 | orchestrator |
2026-02-04 00:52:33.406253 | orchestrator |
2026-02-04 00:52:33.406262 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 00:52:33.406271 | orchestrator |
2026-02-04 00:52:33.406279 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 00:52:33.406288 | orchestrator | Wednesday 04 February 2026 00:46:34 +0000 (0:00:00.243) 0:00:00.243 ****
2026-02-04 00:52:33.406295 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:52:33.406305 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:52:33.406313 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:52:33.406321 | orchestrator |
2026-02-04 00:52:33.406330 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 00:52:33.406337 | orchestrator | Wednesday 04 February 2026 00:46:35 +0000 (0:00:00.338) 0:00:00.582 ****
2026-02-04 00:52:33.406345 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-02-04 00:52:33.406353 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-02-04 00:52:33.406403 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-02-04 00:52:33.406420 | orchestrator |
2026-02-04 00:52:33.406428 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-02-04 00:52:33.406436 | orchestrator |
2026-02-04 00:52:33.406443 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-04 00:52:33.406451 | orchestrator | Wednesday 04 February 2026 00:46:35 +0000 (0:00:00.424) 0:00:01.006 ****
2026-02-04 00:52:33.406459 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:52:33.406464 | orchestrator |
2026-02-04 00:52:33.406497 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-02-04 00:52:33.406503 | orchestrator | Wednesday 04 February 2026 00:46:36 +0000 (0:00:00.609) 0:00:01.616 ****
2026-02-04 00:52:33.406508 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:52:33.406514 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:52:33.406519 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:52:33.406524 | orchestrator |
2026-02-04 00:52:33.406529 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-04 00:52:33.406534 | orchestrator | Wednesday 04 February 2026 00:46:37 +0000 (0:00:01.544) 0:00:03.160 ****
2026-02-04 00:52:33.406539 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:52:33.406544 | orchestrator |
2026-02-04 00:52:33.406549 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-04 00:52:33.406568 | orchestrator | Wednesday 04 February 2026 00:46:38 +0000 (0:00:00.726) 0:00:03.887 ****
2026-02-04 00:52:33.406574 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:52:33.406598 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:52:33.406604 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:52:33.406609 | orchestrator |
2026-02-04 00:52:33.406614 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-04 00:52:33.406619 | orchestrator | Wednesday 04 February 2026 00:46:39 +0000 (0:00:00.729) 0:00:04.616 ****
2026-02-04 00:52:33.406624 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-04 00:52:33.406629 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-04 00:52:33.406634 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-04 00:52:33.406729 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-04 00:52:33.406738 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-04 00:52:33.406743 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-04 00:52:33.406749 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-04 00:52:33.406754 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-04 00:52:33.406759 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-04 00:52:33.406764 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-04 00:52:33.406769 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-04 00:52:33.406773 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-04 00:52:33.406778 | orchestrator |
2026-02-04 00:52:33.406783 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-04 00:52:33.406788 | orchestrator | Wednesday 04 February 2026 00:46:43 +0000 (0:00:04.672) 0:00:09.288 ****
2026-02-04 00:52:33.406793 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-04 00:52:33.406798 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-04 00:52:33.406803 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-04 00:52:33.406808 | orchestrator |
2026-02-04 00:52:33.406814 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-04 00:52:33.406820 | orchestrator | Wednesday 04 February 2026 00:46:45 +0000 (0:00:01.485) 0:00:10.774 ****
2026-02-04 00:52:33.406826 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-04 00:52:33.406831 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-04 00:52:33.406837 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-04 00:52:33.406843 | orchestrator |
2026-02-04 00:52:33.406849 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-04 00:52:33.406854 | orchestrator | Wednesday 04 February 2026 00:46:47 +0000 (0:00:01.860) 0:00:12.634 ****
2026-02-04 00:52:33.406860 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-02-04 00:52:33.406867 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:52:33.406885 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2026-02-04 00:52:33.406897 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:52:33.406909 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2026-02-04 00:52:33.406917 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:52:33.406924 | orchestrator |
2026-02-04 00:52:33.406932 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2026-02-04 00:52:33.406940 | orchestrator | Wednesday 04 February 2026 00:46:47 +0000 (0:00:00.611) 0:00:13.246 ****
2026-02-04 00:52:33.406951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-04 00:52:33.406966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-04 00:52:33.406990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-04 00:52:33.406999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:52:33.407010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:52:33.407026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:52:33.407035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:52:33.407045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:52:33.407059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:52:33.407067 | orchestrator |
2026-02-04 00:52:33.407079 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2026-02-04 00:52:33.407088 | orchestrator | Wednesday 04 February 2026 00:46:49 +0000 (0:00:02.004) 0:00:15.250 ****
2026-02-04 00:52:33.407097 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:52:33.407105 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:52:33.407114 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:52:33.407122 | orchestrator |
2026-02-04 00:52:33.407132 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2026-02-04 00:52:33.407139 | orchestrator | Wednesday 04 February 2026 00:46:51 +0000 (0:00:01.447) 0:00:16.698 ****
2026-02-04 00:52:33.407148 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-02-04 00:52:33.407156 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-02-04 00:52:33.407164 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-02-04 00:52:33.407172 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-02-04 00:52:33.407180 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-02-04 00:52:33.407189 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-02-04 00:52:33.407197 | orchestrator |
2026-02-04 00:52:33.407205 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-02-04 00:52:33.407213 | orchestrator | Wednesday 04 February 2026 00:46:53 +0000 (0:00:01.995) 0:00:18.693 ****
2026-02-04 00:52:33.407221 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:52:33.407229 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:52:33.407237 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:52:33.407245 | orchestrator |
2026-02-04 00:52:33.407277 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-02-04 00:52:33.407286 | orchestrator | Wednesday 04 February 2026 00:46:55 +0000 (0:00:02.354) 0:00:21.048 ****
2026-02-04 00:52:33.407294 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:52:33.407303 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:52:33.407311 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:52:33.407318 | orchestrator |
2026-02-04 00:52:33.407326 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-02-04 00:52:33.407334 | orchestrator | Wednesday 04 February 2026 00:46:57 +0000 (0:00:01.464) 0:00:22.512 ****
2026-02-04 00:52:33.407343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-04 00:52:33.407376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:52:33.407393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:52:33.407402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__78926f3817b383d1ad3cff330d84ee3d02c18612', '__omit_place_holder__78926f3817b383d1ad3cff330d84ee3d02c18612'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-04 00:52:33.407411 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:52:33.407483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-04 00:52:33.407493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:52:33.407502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:52:33.407516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__78926f3817b383d1ad3cff330d84ee3d02c18612', '__omit_place_holder__78926f3817b383d1ad3cff330d84ee3d02c18612'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-04 00:52:33.407530 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:52:33.407538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-04 00:52:33.407547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 00:52:33.407558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:52:33.407567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__78926f3817b383d1ad3cff330d84ee3d02c18612', '__omit_place_holder__78926f3817b383d1ad3cff330d84ee3d02c18612'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-04 00:52:33.407575
| orchestrator | skipping: [testbed-node-2]
2026-02-04 00:52:33.407583 | orchestrator |
2026-02-04 00:52:33.407591 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-02-04 00:52:33.407600 | orchestrator | Wednesday 04 February 2026 00:46:57 +0000 (0:00:00.560) 0:00:23.073 ****
2026-02-04 00:52:33.407608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-04 00:52:33.407621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-04 00:52:33.407635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-04 00:52:33.407644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:52:33.407652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:52:33.407661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.407669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.407678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:52:33.407699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__78926f3817b383d1ad3cff330d84ee3d02c18612', '__omit_place_holder__78926f3817b383d1ad3cff330d84ee3d02c18612'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-04 00:52:33.407770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__78926f3817b383d1ad3cff330d84ee3d02c18612', '__omit_place_holder__78926f3817b383d1ad3cff330d84ee3d02c18612'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-04 00:52:33.407785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.407797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__78926f3817b383d1ad3cff330d84ee3d02c18612', '__omit_place_holder__78926f3817b383d1ad3cff330d84ee3d02c18612'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-04 00:52:33.407805 | orchestrator | 2026-02-04 00:52:33.407814 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-04 00:52:33.407822 | orchestrator | Wednesday 04 February 2026 00:47:00 +0000 (0:00:02.640) 0:00:25.714 **** 2026-02-04 00:52:33.407830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-04 00:52:33.407839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-04 00:52:33.407860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-04 00:52:33.407869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:52:33.407877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:52:33.407889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:52:33.407898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 00:52:33.407907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 00:52:33.407921 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 00:52:33.407929 | orchestrator |
2026-02-04 00:52:33.407937 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-02-04 00:52:33.407946 | orchestrator | Wednesday 04 February 2026 00:47:03 +0000 (0:00:03.108) 0:00:28.822 ****
2026-02-04 00:52:33.407954 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-04 00:52:33.408168 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-04 00:52:33.408179 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-04 00:52:33.408184 | orchestrator |
2026-02-04 00:52:33.408189 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-02-04 00:52:33.408194 | orchestrator | Wednesday 04 February 2026 00:47:05 +0000 (0:00:01.938) 0:00:30.761 ****
2026-02-04 00:52:33.408198 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-04 00:52:33.408203 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-04 00:52:33.408208 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-04 00:52:33.408213 | orchestrator |
2026-02-04 00:52:33.408218 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-02-04 00:52:33.408223 | orchestrator | Wednesday 04 February 2026 00:47:11 +0000 (0:00:06.051) 0:00:36.812 ****
2026-02-04 00:52:33.408228 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:52:33.408233 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:52:33.408238 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:52:33.408243 | orchestrator |
2026-02-04 00:52:33.408248 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-02-04 00:52:33.408253 | orchestrator | Wednesday 04 February 2026 00:47:12 +0000 (0:00:01.265) 0:00:38.078 ****
2026-02-04 00:52:33.408258 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-04 00:52:33.408265 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-04 00:52:33.408270 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-04 00:52:33.408274 | orchestrator |
2026-02-04 00:52:33.408279 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-02-04 00:52:33.408284 | orchestrator | Wednesday 04 February 2026 00:47:15 +0000 (0:00:03.003) 0:00:41.081 ****
2026-02-04 00:52:33.408293 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-04 00:52:33.408298 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-04 00:52:33.408303 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-04 00:52:33.408308 | orchestrator |
2026-02-04 00:52:33.408313 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-02-04 00:52:33.408323 | orchestrator | Wednesday 04 February 2026 00:47:18 +0000 (0:00:02.564) 0:00:43.646 ****
2026-02-04 00:52:33.408328 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2026-02-04 00:52:33.408333 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2026-02-04 00:52:33.408338 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2026-02-04 00:52:33.408343 | orchestrator |
2026-02-04 00:52:33.408348 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-02-04 00:52:33.408353 | orchestrator | Wednesday 04 February 2026 00:47:20 +0000 (0:00:01.933) 0:00:45.579 ****
2026-02-04 00:52:33.408357 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2026-02-04 00:52:33.408381 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2026-02-04 00:52:33.408386 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2026-02-04 00:52:33.408391 | orchestrator |
2026-02-04 00:52:33.408396 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-04 00:52:33.408401 | orchestrator | Wednesday 04 February 2026 00:47:21 +0000 (0:00:01.838) 0:00:47.418 ****
2026-02-04 00:52:33.408406 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:52:33.408411 | orchestrator |
2026-02-04 00:52:33.408416 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2026-02-04 00:52:33.408421 | orchestrator | Wednesday 04 February 2026 00:47:22 +0000 (0:00:00.992) 0:00:48.410 ****
2026-02-04 00:52:33.408427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-04 00:52:33.408438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-04 00:52:33.408444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-04 00:52:33.408449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 
'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:52:33.408461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:52:33.408466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:52:33.408472 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 00:52:33.408477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 00:52:33.408485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 00:52:33.408490 | orchestrator | 2026-02-04 00:52:33.408495 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-02-04 00:52:33.408500 | orchestrator | Wednesday 04 February 2026 00:47:27 +0000 (0:00:04.326) 0:00:52.736 **** 2026-02-04 00:52:33.408505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.408518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.408523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.408529 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.408534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.408539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.408548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.408553 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.408558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.408564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.408575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.408581 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.408586 | orchestrator | 2026-02-04 00:52:33.408591 | orchestrator | TASK 
[service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-02-04 00:52:33.408596 | orchestrator | Wednesday 04 February 2026 00:47:28 +0000 (0:00:00.765) 0:00:53.502 **** 2026-02-04 00:52:33.408601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.408606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.408613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.408618 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.408624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.408632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.408642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.408647 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.408652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.408658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.408663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.408668 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.408673 | orchestrator | 2026-02-04 00:52:33.408678 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-04 00:52:33.408683 | orchestrator | Wednesday 04 February 2026 00:47:28 +0000 (0:00:00.705) 0:00:54.207 **** 2026-02-04 00:52:33.408690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.408699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.408704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.408712 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.408717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.408722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.408728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.408733 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.408740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.408749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.408754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.408759 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.408764 | orchestrator | 2026-02-04 00:52:33.408769 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-04 00:52:33.408774 | orchestrator | Wednesday 04 February 2026 00:47:29 +0000 (0:00:01.082) 0:00:55.290 **** 2026-02-04 00:52:33.408783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.408788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.408794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.408800 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.408806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.408931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.408939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.408944 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.408953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.408959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.408965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.408971 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.408994 | orchestrator | 2026-02-04 00:52:33.409003 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-04 00:52:33.409016 | orchestrator | Wednesday 04 February 2026 00:47:30 +0000 (0:00:00.641) 0:00:55.931 **** 2026-02-04 00:52:33.409027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.409581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.409604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.409609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.409615 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.409626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.409631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.409636 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.409641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.409658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.409664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.409669 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.409674 | orchestrator | 2026-02-04 00:52:33.409679 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-02-04 00:52:33.409685 | orchestrator | Wednesday 04 February 2026 00:47:31 +0000 (0:00:00.966) 0:00:56.898 **** 2026-02-04 00:52:33.409690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.409698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.409788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.409793 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.409798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.409808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.409818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.409823 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.409828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.409837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.409842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.409847 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.409852 | orchestrator | 2026-02-04 00:52:33.409857 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-02-04 00:52:33.409862 | orchestrator | Wednesday 04 February 2026 00:47:32 +0000 (0:00:00.938) 0:00:57.836 **** 2026-02-04 00:52:33.409867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.409876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.409885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.409890 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.409895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.409900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.409908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.409913 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.409927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.409937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.409942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.409947 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.409952 | orchestrator | 2026-02-04 00:52:33.409957 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-02-04 00:52:33.409968 | orchestrator | Wednesday 04 February 2026 00:47:33 +0000 (0:00:00.608) 0:00:58.445 
**** 2026-02-04 00:52:33.409976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.409982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.409990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.409995 | orchestrator | skipping: 
[testbed-node-1] 2026-02-04 00:52:33.410000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.410010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.410063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.410071 | orchestrator | skipping: 
[testbed-node-0] 2026-02-04 00:52:33.410083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 00:52:33.410091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 00:52:33.410098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 00:52:33.410112 | orchestrator | skipping: 
[testbed-node-2] 2026-02-04 00:52:33.410120 | orchestrator | 2026-02-04 00:52:33.410128 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-04 00:52:33.410136 | orchestrator | Wednesday 04 February 2026 00:47:33 +0000 (0:00:00.745) 0:00:59.190 **** 2026-02-04 00:52:33.410161 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-04 00:52:33.410185 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-04 00:52:33.410196 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-04 00:52:33.410205 | orchestrator | 2026-02-04 00:52:33.410212 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-04 00:52:33.410222 | orchestrator | Wednesday 04 February 2026 00:47:35 +0000 (0:00:02.198) 0:01:01.389 **** 2026-02-04 00:52:33.410231 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-04 00:52:33.410240 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-04 00:52:33.410249 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-04 00:52:33.410258 | orchestrator | 2026-02-04 00:52:33.410267 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-04 00:52:33.410276 | orchestrator | Wednesday 04 February 2026 00:47:37 +0000 (0:00:01.736) 0:01:03.125 **** 2026-02-04 00:52:33.410285 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-04 00:52:33.410294 | orchestrator | skipping: [testbed-node-1] => (item={'src': 
'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-04 00:52:33.410304 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-04 00:52:33.410313 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-04 00:52:33.410328 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.410386 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-04 00:52:33.410399 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.410411 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-04 00:52:33.410422 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.410432 | orchestrator | 2026-02-04 00:52:33.410440 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-02-04 00:52:33.410448 | orchestrator | Wednesday 04 February 2026 00:47:38 +0000 (0:00:01.115) 0:01:04.240 **** 2026-02-04 00:52:33.410462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-04 00:52:33.410472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-04 00:52:33.410480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-04 00:52:33.410499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:52:33.410508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:52:33.410516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 00:52:33.410525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 00:52:33.410539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 00:52:33.410548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 00:52:33.410564 | orchestrator | 2026-02-04 00:52:33.410572 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-04 00:52:33.410580 | orchestrator | Wednesday 04 February 2026 00:47:41 +0000 (0:00:03.156) 0:01:07.397 **** 2026-02-04 00:52:33.410613 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:52:33.410621 | orchestrator | 2026-02-04 00:52:33.410629 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-04 00:52:33.410636 | orchestrator | Wednesday 04 February 2026 00:47:42 +0000 (0:00:00.724) 0:01:08.121 **** 2026-02-04 00:52:33.410650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-04 00:52:33.410657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 00:52:33.410662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.410667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.410676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-04 00:52:33.410687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 00:52:33.410692 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.410733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.410744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-04 00:52:33.410749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 00:52:33.410759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.410768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 
00:52:33.410773 | orchestrator | 2026-02-04 00:52:33.410779 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-02-04 00:52:33.410784 | orchestrator | Wednesday 04 February 2026 00:47:47 +0000 (0:00:04.549) 0:01:12.670 **** 2026-02-04 00:52:33.410792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-04 00:52:33.410798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 00:52:33.410803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.410808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.410813 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.410822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8042', 'listen_port': '8042'}}}})  2026-02-04 00:52:33.410831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 00:52:33.410838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.410843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.410848 | orchestrator | skipping: [testbed-node-1] 
2026-02-04 00:52:33.410854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-04 00:52:33.410859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 00:52:33.410866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.410876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.410881 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.410886 | orchestrator | 2026-02-04 00:52:33.410891 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-04 00:52:33.410896 | orchestrator | Wednesday 04 February 2026 00:47:48 +0000 (0:00:00.990) 0:01:13.661 **** 2026-02-04 00:52:33.410901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-04 00:52:33.410907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-04 00:52:33.410913 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.410921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-04 00:52:33.410935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-04 00:52:33.410940 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.410945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-04 00:52:33.410950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-04 00:52:33.410955 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.410960 | orchestrator | 2026-02-04 00:52:33.410965 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-04 00:52:33.410970 | orchestrator | Wednesday 04 February 2026 00:47:49 +0000 (0:00:00.897) 0:01:14.558 **** 2026-02-04 00:52:33.410975 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.410980 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.410985 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.410989 | orchestrator | 2026-02-04 00:52:33.410994 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-04 00:52:33.410999 | orchestrator | Wednesday 04 February 2026 00:47:50 +0000 (0:00:01.217) 0:01:15.776 **** 2026-02-04 00:52:33.411049 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.411054 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.411059 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.411064 | orchestrator | 2026-02-04 00:52:33.411069 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-04 00:52:33.411078 | orchestrator | Wednesday 04 February 2026 00:47:52 +0000 (0:00:01.966) 
0:01:17.742 **** 2026-02-04 00:52:33.411083 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:52:33.411087 | orchestrator | 2026-02-04 00:52:33.411092 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-04 00:52:33.411097 | orchestrator | Wednesday 04 February 2026 00:47:53 +0000 (0:00:01.266) 0:01:19.008 **** 2026-02-04 00:52:33.411107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 00:52:33.411113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.411119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.411127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 00:52:33.411133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.411145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.411158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 00:52:33.411167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.411181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.411217 | orchestrator | 2026-02-04 00:52:33.411225 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-04 00:52:33.411234 | orchestrator | Wednesday 04 February 2026 00:47:56 +0000 (0:00:03.149) 0:01:22.157 **** 2026-02-04 00:52:33.411241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 
'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 00:52:33.411255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.411270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.411278 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.411287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 00:52:33.411301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 00:52:33.411311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.411325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.411339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.411347 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.411355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.411388 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.411394 | orchestrator | 2026-02-04 00:52:33.411400 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-04 00:52:33.411406 | orchestrator | Wednesday 04 February 2026 00:47:57 +0000 (0:00:00.540) 0:01:22.698 **** 2026-02-04 00:52:33.411412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-04 00:52:33.411418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-04 00:52:33.411424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-04 00:52:33.411438 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-04 00:52:33.411448 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.411459 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.411469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-04 00:52:33.411483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-04 00:52:33.411490 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.411499 | orchestrator | 2026-02-04 00:52:33.411508 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-04 00:52:33.411516 | orchestrator | Wednesday 04 February 2026 00:47:58 +0000 (0:00:00.871) 0:01:23.570 **** 2026-02-04 00:52:33.411524 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.411533 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.411538 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.411543 | orchestrator | 2026-02-04 00:52:33.411547 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-04 00:52:33.411552 | orchestrator | Wednesday 04 February 2026 00:47:59 +0000 (0:00:01.321) 0:01:24.892 **** 2026-02-04 00:52:33.411557 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.411563 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.411571 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.411578 | 
orchestrator | 2026-02-04 00:52:33.411590 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-04 00:52:33.411599 | orchestrator | Wednesday 04 February 2026 00:48:01 +0000 (0:00:02.094) 0:01:26.987 **** 2026-02-04 00:52:33.411607 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.411615 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.411622 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.411629 | orchestrator | 2026-02-04 00:52:33.411636 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-04 00:52:33.411643 | orchestrator | Wednesday 04 February 2026 00:48:01 +0000 (0:00:00.259) 0:01:27.247 **** 2026-02-04 00:52:33.411651 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:52:33.411659 | orchestrator | 2026-02-04 00:52:33.411667 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-04 00:52:33.411675 | orchestrator | Wednesday 04 February 2026 00:48:02 +0000 (0:00:00.734) 0:01:27.981 **** 2026-02-04 00:52:33.411691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 
2000 rise 2 fall 5']}}}}) 2026-02-04 00:52:33.411700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-04 00:52:33.411747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-04 00:52:33.411785 | orchestrator | 2026-02-04 00:52:33.411791 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-04 00:52:33.411796 | orchestrator | Wednesday 04 
February 2026 00:48:04 +0000 (0:00:02.448) 0:01:30.430 **** 2026-02-04 00:52:33.411801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-04 00:52:33.411806 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.411811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-04 00:52:33.411816 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.411825 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-04 00:52:33.411830 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.411835 | orchestrator | 2026-02-04 00:52:33.411840 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-04 00:52:33.411869 | orchestrator | Wednesday 04 February 2026 00:48:06 +0000 (0:00:01.343) 0:01:31.773 **** 2026-02-04 00:52:33.411881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-04 00:52:33.411887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 
5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-04 00:52:33.411894 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.411902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-04 00:52:33.411907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-04 00:52:33.411912 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.411917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-04 00:52:33.411922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 
192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-04 00:52:33.411927 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.411932 | orchestrator | 2026-02-04 00:52:33.411936 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-04 00:52:33.411941 | orchestrator | Wednesday 04 February 2026 00:48:07 +0000 (0:00:01.506) 0:01:33.279 **** 2026-02-04 00:52:33.411946 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.411951 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.411956 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.411968 | orchestrator | 2026-02-04 00:52:33.411973 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-04 00:52:33.411978 | orchestrator | Wednesday 04 February 2026 00:48:08 +0000 (0:00:00.733) 0:01:34.013 **** 2026-02-04 00:52:33.411982 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.411987 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.411999 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.412007 | orchestrator | 2026-02-04 00:52:33.412014 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-04 00:52:33.412027 | orchestrator | Wednesday 04 February 2026 00:48:09 +0000 (0:00:01.228) 0:01:35.241 **** 2026-02-04 00:52:33.412035 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:52:33.412048 | orchestrator | 2026-02-04 00:52:33.412056 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-04 00:52:33.412063 | orchestrator | Wednesday 04 February 2026 00:48:10 +0000 (0:00:00.773) 0:01:36.015 **** 2026-02-04 00:52:33.412073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 00:52:33.412086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.412096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.412105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.412117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 00:52:33.412127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.412132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.412140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  
2026-02-04 00:52:33.412146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 00:52:33.412151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.412159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.412169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.412174 | orchestrator | 2026-02-04 00:52:33.412179 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-04 00:52:33.412184 | orchestrator | Wednesday 04 February 2026 00:48:14 +0000 (0:00:03.638) 0:01:39.654 **** 2026-02-04 00:52:33.412192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 00:52:33.412197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.412202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.412211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.412220 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.412225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 00:52:33.412230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.412238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.412243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.412248 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.412253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 00:52:33.412267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.412272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.412282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.412287 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.412293 | orchestrator | 2026-02-04 00:52:33.412297 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-04 00:52:33.412302 | orchestrator | Wednesday 04 February 2026 00:48:15 +0000 (0:00:01.281) 0:01:40.935 **** 2026-02-04 00:52:33.412315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-04 00:52:33.412320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-04 00:52:33.412326 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.412331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-04 00:52:33.412335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-04 00:52:33.412345 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.412350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-04 00:52:33.412355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-04 00:52:33.412384 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.412389 | orchestrator | 2026-02-04 00:52:33.412394 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-04 00:52:33.412399 | orchestrator | Wednesday 04 February 2026 00:48:16 +0000 (0:00:01.015) 0:01:41.951 **** 2026-02-04 00:52:33.412404 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.412409 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.412414 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.412419 | orchestrator | 2026-02-04 00:52:33.412424 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-04 00:52:33.412429 | orchestrator | Wednesday 04 February 2026 00:48:17 +0000 (0:00:01.338) 0:01:43.289 **** 2026-02-04 00:52:33.412434 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.412477 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.412486 | orchestrator | changed: [testbed-node-2] 2026-02-04 
00:52:33.412494 | orchestrator | 2026-02-04 00:52:33.412506 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-04 00:52:33.412514 | orchestrator | Wednesday 04 February 2026 00:48:19 +0000 (0:00:01.929) 0:01:45.219 **** 2026-02-04 00:52:33.412522 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.412529 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.412537 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.412544 | orchestrator | 2026-02-04 00:52:33.412552 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-02-04 00:52:33.412560 | orchestrator | Wednesday 04 February 2026 00:48:20 +0000 (0:00:00.507) 0:01:45.726 **** 2026-02-04 00:52:33.412567 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.412608 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.412616 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.412624 | orchestrator | 2026-02-04 00:52:33.412631 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-04 00:52:33.412638 | orchestrator | Wednesday 04 February 2026 00:48:20 +0000 (0:00:00.356) 0:01:46.083 **** 2026-02-04 00:52:33.412646 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:52:33.412654 | orchestrator | 2026-02-04 00:52:33.412662 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-04 00:52:33.412669 | orchestrator | Wednesday 04 February 2026 00:48:21 +0000 (0:00:00.718) 0:01:46.801 **** 2026-02-04 00:52:33.412683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 00:52:33.412693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 00:52:33.412735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 
00:52:33.412745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 00:52:33.413406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 00:52:33.413450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': 
'30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 00:52:33.413607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 00:52:33.413616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}}})
2026-02-04 00:52:33.413670 | orchestrator |
2026-02-04 00:52:33.413678 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-02-04 00:52:33.413687 | orchestrator | Wednesday 04 February 2026 00:48:25 +0000 (0:00:04.406) 0:01:51.208 ****
2026-02-04 00:52:33.413695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-04 00:52:33.413707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 00:52:33.413715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413765 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.413774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 00:52:33.413787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 00:52:33.413795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413838 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.413851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 00:52:33.413922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 00:52:33.413950 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.413975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}})
2026-02-04 00:52:33.413989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}}})
2026-02-04 00:52:33.413997 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:52:33.414004 | orchestrator |
2026-02-04 00:52:33.414012 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-02-04 00:52:33.414064 | orchestrator | Wednesday 04 February 2026 00:48:26 +0000 (0:00:00.837) 0:01:52.045 ****
2026-02-04 00:52:33.414073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-02-04 00:52:33.414084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-02-04 00:52:33.414100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-02-04 00:52:33.414110 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:52:33.414119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-02-04 00:52:33.414128 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:52:33.414137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2026-02-04 00:52:33.414146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2026-02-04 00:52:33.414155 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:52:33.414175 | orchestrator |
2026-02-04 00:52:33.414184 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-02-04 00:52:33.414205 | orchestrator | Wednesday 04 February 2026 00:48:27 +0000 (0:00:00.919) 0:01:52.965 ****
2026-02-04 00:52:33.414214 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:52:33.414224 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:52:33.414232 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:52:33.414241 | orchestrator |
2026-02-04 00:52:33.414250 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-02-04 00:52:33.414259 | orchestrator | Wednesday 04 February 2026 00:48:28 +0000 (0:00:01.370) 0:01:54.335 ****
2026-02-04 00:52:33.414267 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:52:33.414276 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:52:33.414284 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:52:33.414293 | orchestrator |
2026-02-04 00:52:33.414303 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-02-04 00:52:33.414311 | orchestrator | Wednesday 04 February 2026 00:48:30 +0000 (0:00:01.731) 0:01:56.067 ****
2026-02-04 00:52:33.414320 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:52:33.414329 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:52:33.414337 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:52:33.414346 | orchestrator |
2026-02-04 00:52:33.414354 | orchestrator | TASK [include_role : glance] ***************************************************
2026-02-04 00:52:33.414409 | orchestrator | Wednesday 04 February 2026 00:48:31 +0000 (0:00:00.567) 0:01:56.634 ****
2026-02-04 00:52:33.414418 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:52:33.414427 | orchestrator |
2026-02-04 00:52:33.414435 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-02-04 00:52:33.414443 | orchestrator | Wednesday 04 February 2026 00:48:31 +0000 (0:00:00.771) 0:01:57.406 ****
2026-02-04 00:52:33.414472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions':
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 00:52:33.414495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 00:52:33.414506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 00:52:33.414531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 00:52:33.414541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 00:52:33.414561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 00:52:33.414570 | orchestrator | 2026-02-04 00:52:33.414578 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-04 00:52:33.414587 | orchestrator | Wednesday 04 February 2026 00:48:36 +0000 (0:00:04.265) 0:02:01.671 **** 2026-02-04 00:52:33.414600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 00:52:33.414615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 00:52:33.414628 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.414639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 00:52:33.414652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 00:52:33.414665 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.414678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 00:52:33.414692 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 00:52:33.414706 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.414714 | orchestrator | 
2026-02-04 00:52:33.414721 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-04 00:52:33.414729 | orchestrator | Wednesday 04 February 2026 00:48:39 +0000 (0:00:03.271) 0:02:04.943 **** 2026-02-04 00:52:33.414737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 00:52:33.414745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 00:52:33.414753 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.414765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 00:52:33.414773 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 00:52:33.414782 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.414790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 00:52:33.414803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 00:52:33.414810 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.414819 | orchestrator | 2026-02-04 00:52:33.414827 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-04 00:52:33.414834 | orchestrator | Wednesday 04 
February 2026 00:48:43 +0000 (0:00:04.261) 0:02:09.204 **** 2026-02-04 00:52:33.414841 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.414848 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.414855 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.414862 | orchestrator | 2026-02-04 00:52:33.414869 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-04 00:52:33.414876 | orchestrator | Wednesday 04 February 2026 00:48:45 +0000 (0:00:01.344) 0:02:10.548 **** 2026-02-04 00:52:33.414884 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.414891 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.414899 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.414906 | orchestrator | 2026-02-04 00:52:33.414945 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-04 00:52:33.414953 | orchestrator | Wednesday 04 February 2026 00:48:47 +0000 (0:00:02.209) 0:02:12.758 **** 2026-02-04 00:52:33.414961 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.415004 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.415013 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.415021 | orchestrator | 2026-02-04 00:52:33.415028 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-04 00:52:33.415036 | orchestrator | Wednesday 04 February 2026 00:48:47 +0000 (0:00:00.579) 0:02:13.338 **** 2026-02-04 00:52:33.415044 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:52:33.415051 | orchestrator | 2026-02-04 00:52:33.415058 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-04 00:52:33.415066 | orchestrator | Wednesday 04 February 2026 00:48:48 +0000 (0:00:00.807) 0:02:14.145 **** 2026-02-04 00:52:33.415074 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 00:52:33.415196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 00:52:33.415218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 00:52:33.415227 | orchestrator | 2026-02-04 00:52:33.415236 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-04 00:52:33.415243 | orchestrator | Wednesday 04 February 2026 00:48:52 +0000 (0:00:03.596) 0:02:17.741 **** 2026-02-04 00:52:33.415251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-04 00:52:33.415288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-04 00:52:33.415296 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.415305 | orchestrator | skipping: [testbed-node-1] 
2026-02-04 00:52:33.415314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-04 00:52:33.415323 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.415331 | orchestrator | 2026-02-04 00:52:33.415339 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-04 00:52:33.415347 | orchestrator | Wednesday 04 February 2026 00:48:53 +0000 (0:00:00.731) 0:02:18.473 **** 2026-02-04 00:52:33.415355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-04 00:52:33.415387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-04 00:52:33.415402 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.415415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-04 00:52:33.415424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-04 00:52:33.415432 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.415439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-04 00:52:33.415447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-04 00:52:33.415455 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.415462 | orchestrator | 2026-02-04 00:52:33.415470 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-04 00:52:33.415478 | orchestrator | Wednesday 04 February 2026 00:48:53 +0000 (0:00:00.797) 0:02:19.271 **** 2026-02-04 00:52:33.415485 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.415492 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.415499 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.415507 | orchestrator | 2026-02-04 00:52:33.415516 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-04 00:52:33.415524 | orchestrator | Wednesday 04 February 2026 00:48:55 +0000 (0:00:01.863) 0:02:21.135 **** 2026-02-04 00:52:33.415532 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.415540 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.415548 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.415557 | orchestrator | 2026-02-04 00:52:33.415565 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-04 00:52:33.415573 | orchestrator | Wednesday 04 February 2026 00:48:57 +0000 (0:00:02.036) 0:02:23.171 **** 2026-02-04 
00:52:33.415582 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.415590 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.415599 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.415606 | orchestrator | 2026-02-04 00:52:33.415615 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-04 00:52:33.415623 | orchestrator | Wednesday 04 February 2026 00:48:58 +0000 (0:00:00.413) 0:02:23.584 **** 2026-02-04 00:52:33.415631 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:52:33.415639 | orchestrator | 2026-02-04 00:52:33.415647 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-04 00:52:33.415674 | orchestrator | Wednesday 04 February 2026 00:48:58 +0000 (0:00:00.712) 0:02:24.297 **** 2026-02-04 00:52:33.415699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 00:52:33.415718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 00:52:33.415738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 00:52:33.415753 | orchestrator | 2026-02-04 00:52:33.415760 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-04 00:52:33.415768 | orchestrator | Wednesday 04 February 2026 00:49:02 +0000 (0:00:03.722) 0:02:28.019 **** 2026-02-04 00:52:33.415781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': 
False, 'custom_member_list': []}}}})  2026-02-04 00:52:33.415790 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.415802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 00:52:33.415816 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.415830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 00:52:33.415845 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.415853 | orchestrator | 2026-02-04 00:52:33.415861 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-04 00:52:33.415869 | orchestrator | Wednesday 04 February 2026 00:49:03 +0000 (0:00:00.964) 0:02:28.984 **** 2026-02-04 00:52:33.415877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-04 00:52:33.415885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-04 00:52:33.415898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}})  2026-02-04 00:52:33.415904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-04 00:52:33.415909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-04 00:52:33.415916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-04 00:52:33.415921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-04 00:52:33.415960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-04 00:52:33.415967 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-04 00:52:33.415972 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.415977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-04 00:52:33.415982 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.415995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-04 00:52:33.416001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-04 00:52:33.416007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-04 00:52:33.416012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-04 00:52:33.416017 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-04 00:52:33.416022 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.416027 | orchestrator | 2026-02-04 00:52:33.416032 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-04 00:52:33.416037 | orchestrator | Wednesday 04 February 2026 00:49:04 +0000 (0:00:00.827) 0:02:29.812 **** 2026-02-04 00:52:33.416041 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.416046 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.416051 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.416056 | orchestrator | 2026-02-04 00:52:33.416065 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-04 00:52:33.416070 | orchestrator | Wednesday 04 February 2026 00:49:05 +0000 (0:00:01.181) 0:02:30.993 **** 2026-02-04 00:52:33.416074 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.416079 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.416084 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.416089 | orchestrator | 2026-02-04 00:52:33.416097 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-04 00:52:33.416105 | orchestrator | Wednesday 04 February 2026 00:49:07 +0000 (0:00:01.955) 0:02:32.948 **** 2026-02-04 00:52:33.416115 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.416125 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.416134 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.416141 | orchestrator | 2026-02-04 00:52:33.416148 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-04 00:52:33.416154 | orchestrator | Wednesday 04 February 2026 00:49:07 +0000 (0:00:00.270) 0:02:33.218 **** 2026-02-04 
00:52:33.416162 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.416170 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.416177 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.416184 | orchestrator | 2026-02-04 00:52:33.416191 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-04 00:52:33.416199 | orchestrator | Wednesday 04 February 2026 00:49:08 +0000 (0:00:00.393) 0:02:33.612 **** 2026-02-04 00:52:33.416206 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:52:33.416213 | orchestrator | 2026-02-04 00:52:33.416220 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-04 00:52:33.416227 | orchestrator | Wednesday 04 February 2026 00:49:09 +0000 (0:00:00.866) 0:02:34.478 **** 2026-02-04 00:52:33.416242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 00:52:33.416257 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:52:33.416266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 00:52:33.416279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 00:52:33.416287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:52:33.416300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 00:52:33.416314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 00:52:33.416323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:52:33.416332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 00:52:33.416340 | orchestrator | 2026-02-04 00:52:33.416348 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-04 00:52:33.416405 | orchestrator | Wednesday 04 February 2026 00:49:11 +0000 (0:00:02.838) 0:02:37.316 **** 2026-02-04 00:52:33.416418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 00:52:33.416437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:52:33.416445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 00:52:33.416453 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.416466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 
00:52:33.416475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:52:33.416486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 00:52:33.416494 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.416829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 00:52:33.416847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:52:33.416853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 00:52:33.416857 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.416862 | orchestrator | 2026-02-04 00:52:33.416867 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] 
********************** 2026-02-04 00:52:33.416872 | orchestrator | Wednesday 04 February 2026 00:49:12 +0000 (0:00:00.701) 0:02:38.018 **** 2026-02-04 00:52:33.416879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-04 00:52:33.416887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-04 00:52:33.416898 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.416908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-04 00:52:33.416922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-04 00:52:33.416930 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.416937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-04 00:52:33.416959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-04 00:52:33.416967 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.416974 | orchestrator | 2026-02-04 00:52:33.416981 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-04 00:52:33.416989 | orchestrator | Wednesday 04 February 2026 00:49:13 +0000 (0:00:00.726) 0:02:38.745 **** 2026-02-04 00:52:33.416996 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.417002 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.417007 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.417012 | orchestrator | 2026-02-04 00:52:33.417016 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-04 00:52:33.417021 | orchestrator | Wednesday 04 February 2026 00:49:14 +0000 (0:00:01.432) 0:02:40.178 **** 2026-02-04 00:52:33.417026 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.417030 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.417035 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.417040 | orchestrator | 2026-02-04 00:52:33.417044 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-04 00:52:33.417049 | orchestrator | Wednesday 04 February 2026 00:49:16 +0000 (0:00:02.015) 0:02:42.193 **** 2026-02-04 00:52:33.417054 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.417058 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.417063 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.417068 | orchestrator | 2026-02-04 00:52:33.417072 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-04 00:52:33.417077 | orchestrator | Wednesday 04 February 2026 00:49:17 +0000 (0:00:00.540) 0:02:42.734 **** 2026-02-04 00:52:33.417082 | orchestrator | included: magnum 
for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:52:33.417086 | orchestrator | 2026-02-04 00:52:33.417091 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-02-04 00:52:33.417096 | orchestrator | Wednesday 04 February 2026 00:49:18 +0000 (0:00:01.002) 0:02:43.737 **** 2026-02-04 00:52:33.417101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 00:52:33.417107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 00:52:33.417122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 00:52:33.417143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417148 | orchestrator | 2026-02-04 00:52:33.417152 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-04 00:52:33.417169 | orchestrator | Wednesday 04 February 2026 00:49:22 +0000 (0:00:04.241) 0:02:47.979 **** 2026-02-04 00:52:33.417185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 00:52:33.417206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417214 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.417227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 00:52:33.417236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417243 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.417251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 00:52:33.417264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417271 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.417279 | orchestrator | 2026-02-04 00:52:33.417290 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-04 00:52:33.417299 | orchestrator | Wednesday 04 February 2026 00:49:23 +0000 (0:00:00.926) 0:02:48.905 **** 2026-02-04 00:52:33.417304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-04 00:52:33.417310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  
2026-02-04 00:52:33.417318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-04 00:52:33.417323 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.417328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-04 00:52:33.417332 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.417337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-04 00:52:33.417342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-04 00:52:33.417347 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.417352 | orchestrator | 2026-02-04 00:52:33.417356 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-04 00:52:33.417380 | orchestrator | Wednesday 04 February 2026 00:49:24 +0000 (0:00:01.131) 0:02:50.037 **** 2026-02-04 00:52:33.417385 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.417389 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.417394 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.417399 | orchestrator | 2026-02-04 00:52:33.417403 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-04 00:52:33.417408 | orchestrator | Wednesday 04 February 2026 00:49:25 +0000 (0:00:01.227) 0:02:51.265 **** 2026-02-04 00:52:33.417463 
| orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.417469 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.417475 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.417505 | orchestrator | 2026-02-04 00:52:33.417510 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-04 00:52:33.417516 | orchestrator | Wednesday 04 February 2026 00:49:27 +0000 (0:00:01.907) 0:02:53.172 **** 2026-02-04 00:52:33.417521 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:52:33.417532 | orchestrator | 2026-02-04 00:52:33.417537 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-04 00:52:33.417542 | orchestrator | Wednesday 04 February 2026 00:49:28 +0000 (0:00:01.123) 0:02:54.295 **** 2026-02-04 00:52:33.417548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-04 00:52:33.417555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-04 00:52:33.417585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-04 00:52:33.417653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417701 | orchestrator | 2026-02-04 00:52:33.417706 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-04 00:52:33.417712 | orchestrator | Wednesday 04 February 2026 00:49:33 +0000 (0:00:04.234) 0:02:58.530 **** 2026-02-04 00:52:33.417718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-04 00:52:33.417723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417745 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.417751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-04 00:52:33.417761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417778 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.417788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 
'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-04 00:52:33.417794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.417813 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.417817 | orchestrator | 2026-02-04 00:52:33.417822 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-04 00:52:33.417827 | orchestrator | Wednesday 04 February 2026 00:49:33 +0000 (0:00:00.698) 0:02:59.228 **** 2026-02-04 00:52:33.417832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-04 00:52:33.417837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-04 00:52:33.417842 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.417847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-04 00:52:33.417851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-04 00:52:33.417856 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.417860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-04 00:52:33.417868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-04 00:52:33.417873 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.417877 | orchestrator | 2026-02-04 00:52:33.417882 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-04 00:52:33.417887 | orchestrator | Wednesday 04 February 2026 00:49:34 +0000 (0:00:01.180) 0:03:00.409 **** 2026-02-04 00:52:33.417894 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.417901 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.417908 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.417915 | orchestrator | 2026-02-04 00:52:33.417926 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-04 00:52:33.417937 | orchestrator | Wednesday 04 February 2026 00:49:36 +0000 (0:00:01.357) 0:03:01.766 **** 2026-02-04 00:52:33.417954 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.417962 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.417969 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.417976 | orchestrator | 2026-02-04 00:52:33.417983 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-04 00:52:33.417992 | orchestrator | Wednesday 04 February 2026 00:49:38 +0000 (0:00:02.086) 
0:03:03.853 **** 2026-02-04 00:52:33.417999 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:52:33.418007 | orchestrator | 2026-02-04 00:52:33.418044 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-04 00:52:33.418055 | orchestrator | Wednesday 04 February 2026 00:49:39 +0000 (0:00:01.293) 0:03:05.146 **** 2026-02-04 00:52:33.418062 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-04 00:52:33.418071 | orchestrator | 2026-02-04 00:52:33.418078 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-04 00:52:33.418086 | orchestrator | Wednesday 04 February 2026 00:49:42 +0000 (0:00:02.932) 0:03:08.079 **** 2026-02-04 00:52:33.418096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 00:52:33.418103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-04 00:52:33.418120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 00:52:33.418141 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.418204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-04 00:52:33.418213 | orchestrator | skipping: [testbed-node-2] 
2026-02-04 00:52:33.418222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 00:52:33.418241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': 
{'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-04 00:52:33.418254 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.418260 | orchestrator | 2026-02-04 00:52:33.418264 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-04 00:52:33.418269 | orchestrator | Wednesday 04 February 2026 00:49:44 +0000 (0:00:01.952) 0:03:10.031 **** 2026-02-04 00:52:33.418274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 00:52:33.418280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-04 00:52:33.418284 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.418297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 00:52:33.418345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-04 00:52:33.418351 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.418356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 00:52:33.418381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-04 00:52:33.418390 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.418395 | orchestrator | 2026-02-04 00:52:33.418400 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-04 00:52:33.418405 | orchestrator | Wednesday 04 February 2026 00:49:46 +0000 (0:00:01.954) 0:03:11.986 **** 2026-02-04 00:52:33.418413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-04 00:52:33.418419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-04 00:52:33.418424 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.418429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-04 00:52:33.418433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-04 00:52:33.418438 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.418443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-04 00:52:33.418448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-04 00:52:33.418457 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.418461 | orchestrator | 2026-02-04 00:52:33.418466 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-04 00:52:33.418473 | orchestrator | Wednesday 04 February 2026 00:49:48 +0000 (0:00:02.438) 0:03:14.424 **** 2026-02-04 00:52:33.418478 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.418483 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.418487 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.418492 | orchestrator | 2026-02-04 00:52:33.418497 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-04 00:52:33.418501 | orchestrator | Wednesday 04 February 2026 00:49:50 +0000 (0:00:01.839) 0:03:16.263 **** 2026-02-04 00:52:33.418506 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.418510 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.418515 | 
orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.418520 | orchestrator | 2026-02-04 00:52:33.418524 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-04 00:52:33.418529 | orchestrator | Wednesday 04 February 2026 00:49:52 +0000 (0:00:01.212) 0:03:17.476 **** 2026-02-04 00:52:33.418534 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.418544 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.418549 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.418555 | orchestrator | 2026-02-04 00:52:33.418563 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-04 00:52:33.418571 | orchestrator | Wednesday 04 February 2026 00:49:52 +0000 (0:00:00.272) 0:03:17.749 **** 2026-02-04 00:52:33.418578 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:52:33.418586 | orchestrator | 2026-02-04 00:52:33.418593 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-04 00:52:33.418601 | orchestrator | Wednesday 04 February 2026 00:49:53 +0000 (0:00:01.224) 0:03:18.973 **** 2026-02-04 00:52:33.418608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}}) 2026-02-04 00:52:33.418617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-04 00:52:33.418631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-04 00:52:33.418639 | orchestrator | 2026-02-04 00:52:33.418646 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-04 00:52:33.418653 | orchestrator | Wednesday 04 February 2026 00:49:55 +0000 (0:00:01.570) 0:03:20.543 **** 2026-02-04 00:52:33.418665 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-04 00:52:33.418670 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.418679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-04 00:52:33.418684 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.418689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-04 00:52:33.418694 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.418698 | orchestrator | 2026-02-04 00:52:33.418703 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-04 00:52:33.418708 | orchestrator | Wednesday 04 February 2026 00:49:55 +0000 (0:00:00.356) 0:03:20.900 **** 2026-02-04 00:52:33.418713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-04 00:52:33.418722 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.418727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-04 00:52:33.418732 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.418736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-04 00:52:33.418741 | orchestrator | skipping: 
[testbed-node-2] 2026-02-04 00:52:33.418769 | orchestrator | 2026-02-04 00:52:33.418774 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-04 00:52:33.418778 | orchestrator | Wednesday 04 February 2026 00:49:56 +0000 (0:00:00.705) 0:03:21.605 **** 2026-02-04 00:52:33.418783 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.418788 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.418792 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.418797 | orchestrator | 2026-02-04 00:52:33.418836 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-04 00:52:33.418842 | orchestrator | Wednesday 04 February 2026 00:49:56 +0000 (0:00:00.392) 0:03:21.998 **** 2026-02-04 00:52:33.418846 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.418851 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.418856 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.418861 | orchestrator | 2026-02-04 00:52:33.418865 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-04 00:52:33.418896 | orchestrator | Wednesday 04 February 2026 00:49:57 +0000 (0:00:01.139) 0:03:23.137 **** 2026-02-04 00:52:33.418902 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.418906 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.418911 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.418916 | orchestrator | 2026-02-04 00:52:33.418920 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-04 00:52:33.418925 | orchestrator | Wednesday 04 February 2026 00:49:57 +0000 (0:00:00.284) 0:03:23.422 **** 2026-02-04 00:52:33.418930 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:52:33.418947 | orchestrator | 2026-02-04 00:52:33.418956 | orchestrator 
| TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-04 00:52:33.418960 | orchestrator | Wednesday 04 February 2026 00:49:59 +0000 (0:00:01.259) 0:03:24.682 **** 2026-02-04 00:52:33.418970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 00:52:33.418976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419008 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-04 00:52:33.419032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:52:33.419053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:52:33.419066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:52:33.419083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-04 00:52:33.419108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:52:33.419116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 00:52:33.419130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-04 
00:52:33.419174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-04 00:52:33.419189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': 
False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:52:33.419213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-04 00:52:33.419229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:52:33.419249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:52:33.419262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:52:33.419282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-04 00:52:33.419298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:52:33.419309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419321 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 00:52:33.419333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-04 00:52:33.419341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 
'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:52:33.419349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-04 00:52:33.419445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:52:33.419455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:52:33.419489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:52:33.419515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-04 00:52:33.419524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:52:33.419529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-04 00:52:33.419549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:52:33.419590 | orchestrator | 2026-02-04 00:52:33.419608 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-04 00:52:33.419614 | orchestrator | Wednesday 04 February 2026 00:50:03 +0000 (0:00:03.967) 0:03:28.649 **** 2026-02-04 00:52:33.419619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 00:52:33.419624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-04 00:52:33.419676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 00:52:33.419715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  
2026-02-04 00:52:33.419724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:52:33.419736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:52:33.419779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 
'timeout': '30'}}})  2026-02-04 00:52:33.419821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-04 00:52:33.419826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:52:33.419836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 
'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:52:33.419841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2026-02-04 00:52:33.419859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:52:33.419876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-04 00:52:33.419882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:52:33.419903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': 
{'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419908 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.419916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:52:33.419921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-04 00:52:33.419975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': 
{'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-04 00:52:33.419984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.419993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:52:33.419998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:52:33.420003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.420008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:52:33.420013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-04 00:52:33.420022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.420031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:52:33.420036 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.420041 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:52:33.420046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.420050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-04 00:52:33.420059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 00:52:33.420078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.420357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': 
'30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-04 00:52:33.420424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-04 00:52:33.420433 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.420442 | orchestrator | 2026-02-04 00:52:33.420450 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-04 00:52:33.420458 | orchestrator | Wednesday 04 February 2026 00:50:04 +0000 (0:00:01.301) 0:03:29.950 **** 2026-02-04 00:52:33.420466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-04 00:52:33.420474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-04 00:52:33.420490 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.420498 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-04 00:52:33.420555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-04 00:52:33.420561 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.420566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-04 00:52:33.420571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-04 00:52:33.420575 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.420580 | orchestrator | 2026-02-04 00:52:33.420585 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-04 00:52:33.420589 | orchestrator | Wednesday 04 February 2026 00:50:06 +0000 (0:00:01.613) 0:03:31.564 **** 2026-02-04 00:52:33.420594 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.420599 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.420603 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.420608 | orchestrator | 2026-02-04 00:52:33.420612 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-04 00:52:33.420617 | orchestrator | Wednesday 04 February 2026 00:50:07 +0000 (0:00:01.210) 0:03:32.775 **** 2026-02-04 00:52:33.420622 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.420626 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.420631 | orchestrator | changed: [testbed-node-2] 
2026-02-04 00:52:33.420636 | orchestrator | 2026-02-04 00:52:33.420640 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-04 00:52:33.420645 | orchestrator | Wednesday 04 February 2026 00:50:09 +0000 (0:00:01.762) 0:03:34.538 **** 2026-02-04 00:52:33.420649 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:52:33.420654 | orchestrator | 2026-02-04 00:52:33.420661 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-04 00:52:33.420673 | orchestrator | Wednesday 04 February 2026 00:50:10 +0000 (0:00:01.124) 0:03:35.662 **** 2026-02-04 00:52:33.420689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 00:52:33.420698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 00:52:33.420708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 00:52:33.420713 | orchestrator | 2026-02-04 00:52:33.420717 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-04 00:52:33.420722 | orchestrator | Wednesday 04 February 2026 00:50:13 +0000 (0:00:03.049) 0:03:38.712 **** 2026-02-04 00:52:33.420727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 00:52:33.420732 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.420757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 00:52:33.420762 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.420767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 00:52:33.420775 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.420780 | orchestrator | 2026-02-04 00:52:33.420785 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-04 00:52:33.420793 | orchestrator | Wednesday 04 February 2026 00:50:13 +0000 (0:00:00.440) 0:03:39.153 **** 2026-02-04 00:52:33.420801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-04 00:52:33.420810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-04 00:52:33.420818 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.420825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}})  2026-02-04 00:52:33.420833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-04 00:52:33.420840 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.420849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-04 00:52:33.420855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-04 00:52:33.420859 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.420865 | orchestrator | 2026-02-04 00:52:33.420874 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-04 00:52:33.420887 | orchestrator | Wednesday 04 February 2026 00:50:14 +0000 (0:00:00.640) 0:03:39.793 **** 2026-02-04 00:52:33.420894 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.420901 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.420908 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.420915 | orchestrator | 2026-02-04 00:52:33.420922 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-04 00:52:33.420929 | orchestrator | Wednesday 04 February 2026 00:50:16 +0000 (0:00:01.652) 0:03:41.446 **** 2026-02-04 00:52:33.420936 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.420943 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.420950 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.420957 | orchestrator | 
2026-02-04 00:52:33.420964 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-04 00:52:33.420971 | orchestrator | Wednesday 04 February 2026 00:50:17 +0000 (0:00:01.880) 0:03:43.326 **** 2026-02-04 00:52:33.421015 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:52:33.421028 | orchestrator | 2026-02-04 00:52:33.421036 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-04 00:52:33.421042 | orchestrator | Wednesday 04 February 2026 00:50:19 +0000 (0:00:01.336) 0:03:44.663 **** 2026-02-04 00:52:33.421058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 00:52:33.421073 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.421081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.421090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 00:52:33.421105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 00:52:33.421125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.421133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.421140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.421147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.421156 | orchestrator | 2026-02-04 00:52:33.421163 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-04 00:52:33.421170 | orchestrator | Wednesday 04 February 2026 00:50:22 +0000 (0:00:03.678) 0:03:48.341 **** 2026-02-04 00:52:33.421186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2026-02-04 00:52:33.421200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.421208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.421215 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.421223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 00:52:33.421231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.421247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.421256 | 
orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.421270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 00:52:33.421278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.421286 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.421294 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.421302 | orchestrator | 2026-02-04 00:52:33.421309 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-04 00:52:33.421316 | orchestrator | Wednesday 04 February 2026 00:50:24 +0000 (0:00:01.101) 0:03:49.443 **** 2026-02-04 00:52:33.421324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-04 00:52:33.421333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-04 00:52:33.421346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-04 00:52:33.421354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-04 00:52:33.421410 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-04 00:52:33.421420 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.421428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-04 00:52:33.421436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-04 00:52:33.421449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-04 00:52:33.421456 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.421463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-04 00:52:33.421470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-04 00:52:33.421477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-04 00:52:33.421486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-04 00:52:33.421493 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.421501 | orchestrator | 2026-02-04 00:52:33.421508 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-04 00:52:33.421515 | orchestrator | Wednesday 04 February 2026 00:50:24 +0000 (0:00:00.867) 0:03:50.310 **** 2026-02-04 00:52:33.421523 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.421530 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.421539 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.421545 | orchestrator | 2026-02-04 00:52:33.421553 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-04 00:52:33.421560 | orchestrator | Wednesday 04 February 2026 00:50:26 +0000 (0:00:01.485) 0:03:51.795 **** 2026-02-04 00:52:33.421567 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.421574 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.421580 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.421588 | orchestrator | 2026-02-04 00:52:33.421595 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-04 00:52:33.421602 | orchestrator | Wednesday 04 February 2026 00:50:28 +0000 (0:00:02.075) 0:03:53.871 **** 2026-02-04 00:52:33.421609 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:52:33.421617 | orchestrator | 2026-02-04 00:52:33.421631 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-04 00:52:33.421639 | orchestrator | Wednesday 04 February 2026 00:50:29 +0000 (0:00:01.373) 0:03:55.244 **** 2026-02-04 00:52:33.421646 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-04 00:52:33.421653 | orchestrator | 2026-02-04 00:52:33.421659 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-04 00:52:33.421665 | orchestrator | Wednesday 04 February 2026 00:50:30 +0000 (0:00:00.803) 0:03:56.048 **** 2026-02-04 00:52:33.421673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-04 00:52:33.421685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-04 00:52:33.421692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-04 00:52:33.421698 | orchestrator | 2026-02-04 00:52:33.421709 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-04 00:52:33.421716 | orchestrator | Wednesday 04 February 2026 00:50:34 +0000 (0:00:03.897) 0:03:59.946 **** 2026-02-04 00:52:33.421723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-04 00:52:33.421730 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.421736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-04 00:52:33.421743 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.421749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-04 00:52:33.421761 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.421767 | orchestrator | 2026-02-04 00:52:33.421774 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-04 00:52:33.421780 | orchestrator | Wednesday 04 February 2026 00:50:35 +0000 (0:00:00.915) 0:04:00.861 **** 2026-02-04 00:52:33.421786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-04 00:52:33.421793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-04 00:52:33.421799 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.421805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-04 00:52:33.421812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-04 00:52:33.421818 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.421825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout 
tunnel 1h']}})  2026-02-04 00:52:33.421832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-04 00:52:33.421846 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.421854 | orchestrator | 2026-02-04 00:52:33.421860 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-04 00:52:33.421867 | orchestrator | Wednesday 04 February 2026 00:50:36 +0000 (0:00:01.327) 0:04:02.188 **** 2026-02-04 00:52:33.421874 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.421881 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.421889 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.421895 | orchestrator | 2026-02-04 00:52:33.421901 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-04 00:52:33.421908 | orchestrator | Wednesday 04 February 2026 00:50:38 +0000 (0:00:02.085) 0:04:04.274 **** 2026-02-04 00:52:33.421914 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.421921 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.421928 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.421935 | orchestrator | 2026-02-04 00:52:33.421947 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-04 00:52:33.421954 | orchestrator | Wednesday 04 February 2026 00:50:41 +0000 (0:00:02.854) 0:04:07.128 **** 2026-02-04 00:52:33.421961 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-04 00:52:33.421969 | orchestrator | 2026-02-04 00:52:33.421975 | orchestrator | TASK [haproxy-config : Copying over 
nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-04 00:52:33.421981 | orchestrator | Wednesday 04 February 2026 00:50:43 +0000 (0:00:01.306) 0:04:08.435 **** 2026-02-04 00:52:33.421989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-04 00:52:33.422003 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.422011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-04 00:52:33.422069 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.422077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 
1h']}}}})  2026-02-04 00:52:33.422083 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.422090 | orchestrator | 2026-02-04 00:52:33.422097 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-04 00:52:33.422105 | orchestrator | Wednesday 04 February 2026 00:50:44 +0000 (0:00:01.111) 0:04:09.546 **** 2026-02-04 00:52:33.422110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-04 00:52:33.422114 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.422119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-04 00:52:33.422128 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.422132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': 
['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-04 00:52:33.422136 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.422141 | orchestrator | 2026-02-04 00:52:33.422150 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-04 00:52:33.422156 | orchestrator | Wednesday 04 February 2026 00:50:45 +0000 (0:00:01.070) 0:04:10.617 **** 2026-02-04 00:52:33.422167 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.422178 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.422185 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.422191 | orchestrator | 2026-02-04 00:52:33.422198 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-04 00:52:33.422205 | orchestrator | Wednesday 04 February 2026 00:50:46 +0000 (0:00:01.464) 0:04:12.081 **** 2026-02-04 00:52:33.422211 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:52:33.422218 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:52:33.422224 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:52:33.422230 | orchestrator | 2026-02-04 00:52:33.422237 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-04 00:52:33.422243 | orchestrator | Wednesday 04 February 2026 00:50:48 +0000 (0:00:02.076) 0:04:14.157 **** 2026-02-04 00:52:33.422250 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:52:33.422257 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:52:33.422264 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:52:33.422271 | orchestrator | 2026-02-04 00:52:33.422277 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-04 00:52:33.422285 | orchestrator | Wednesday 
04 February 2026 00:50:51 +0000 (0:00:02.975) 0:04:17.133 **** 2026-02-04 00:52:33.422290 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-04 00:52:33.422295 | orchestrator | 2026-02-04 00:52:33.422299 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-04 00:52:33.422303 | orchestrator | Wednesday 04 February 2026 00:50:52 +0000 (0:00:00.883) 0:04:18.016 **** 2026-02-04 00:52:33.422308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-04 00:52:33.422312 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.422317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-04 00:52:33.422321 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.422326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-04 00:52:33.422330 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.422334 | orchestrator | 2026-02-04 00:52:33.422338 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-04 00:52:33.422343 | orchestrator | Wednesday 04 February 2026 00:50:53 +0000 (0:00:01.124) 0:04:19.140 **** 2026-02-04 00:52:33.422351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-04 00:52:33.422377 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.422387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-04 
00:52:33.422391 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.422396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-04 00:52:33.422400 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.422404 | orchestrator | 2026-02-04 00:52:33.422408 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-04 00:52:33.422413 | orchestrator | Wednesday 04 February 2026 00:50:54 +0000 (0:00:01.231) 0:04:20.372 **** 2026-02-04 00:52:33.422417 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.422421 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.422426 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.422430 | orchestrator | 2026-02-04 00:52:33.422434 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-04 00:52:33.422438 | orchestrator | Wednesday 04 February 2026 00:50:56 +0000 (0:00:01.391) 0:04:21.764 **** 2026-02-04 00:52:33.422442 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:52:33.422447 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:52:33.422451 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:52:33.422455 | orchestrator | 2026-02-04 00:52:33.422459 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-04 00:52:33.422463 | orchestrator | Wednesday 04 February 2026 00:50:58 +0000 (0:00:02.273) 0:04:24.037 **** 2026-02-04 00:52:33.422468 | 
orchestrator | ok: [testbed-node-1] 2026-02-04 00:52:33.422472 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:52:33.422476 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:52:33.422480 | orchestrator | 2026-02-04 00:52:33.422484 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-04 00:52:33.422489 | orchestrator | Wednesday 04 February 2026 00:51:01 +0000 (0:00:03.299) 0:04:27.336 **** 2026-02-04 00:52:33.422493 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:52:33.422497 | orchestrator | 2026-02-04 00:52:33.422501 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-04 00:52:33.422506 | orchestrator | Wednesday 04 February 2026 00:51:03 +0000 (0:00:01.544) 0:04:28.881 **** 2026-02-04 00:52:33.422510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 00:52:33.422520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 00:52:33.422528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 00:52:33.422546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 00:52:33.422552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.422556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 00:52:33.422564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 00:52:33.422569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 00:52:33.422576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 00:52:33.422585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.422590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 00:52:33.422595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 00:52:33.422599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 00:52:33.422607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 00:52:33.422614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.422619 | orchestrator | 2026-02-04 00:52:33.422623 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-04 00:52:33.422628 | orchestrator | Wednesday 04 February 2026 00:51:06 +0000 (0:00:03.103) 0:04:31.985 **** 2026-02-04 00:52:33.422635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 00:52:33.422640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 00:52:33.422645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 00:52:33.422653 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 00:52:33.422657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.422667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 00:52:33.422671 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.422676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 00:52:33.422680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 00:52:33.422685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 00:52:33.422692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.422696 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.422706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 00:52:33.422710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 00:52:33.422718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 00:52:33.422722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 00:52:33.422727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 00:52:33.422734 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.422738 | orchestrator | 2026-02-04 00:52:33.422743 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-04 00:52:33.422747 | orchestrator | Wednesday 04 February 2026 00:51:07 +0000 (0:00:00.663) 0:04:32.648 **** 2026-02-04 00:52:33.422752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-04 00:52:33.422757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-04 00:52:33.422762 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.422766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-04 00:52:33.422770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-04 00:52:33.422775 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.422779 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-04 00:52:33.422783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-04 00:52:33.422788 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.422792 | orchestrator | 2026-02-04 00:52:33.422799 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-04 00:52:33.422803 | orchestrator | Wednesday 04 February 2026 00:51:08 +0000 (0:00:01.175) 0:04:33.823 **** 2026-02-04 00:52:33.422807 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.422811 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.422816 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.422820 | orchestrator | 2026-02-04 00:52:33.422824 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-04 00:52:33.422828 | orchestrator | Wednesday 04 February 2026 00:51:09 +0000 (0:00:01.406) 0:04:35.230 **** 2026-02-04 00:52:33.422832 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.422837 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.422841 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.422845 | orchestrator | 2026-02-04 00:52:33.422849 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-04 00:52:33.422856 | orchestrator | Wednesday 04 February 2026 00:51:11 +0000 (0:00:01.920) 0:04:37.151 **** 2026-02-04 00:52:33.422860 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:52:33.422865 | orchestrator | 2026-02-04 00:52:33.422869 
| orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-04 00:52:33.422873 | orchestrator | Wednesday 04 February 2026 00:51:12 +0000 (0:00:01.247) 0:04:38.398 **** 2026-02-04 00:52:33.422877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 00:52:33.422885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 00:52:33.422890 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 00:52:33.422898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 00:52:33.422906 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 00:52:33.422915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 00:52:33.422919 | orchestrator | 2026-02-04 00:52:33.422924 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-04 00:52:33.422928 | orchestrator | Wednesday 04 February 2026 00:51:18 +0000 (0:00:05.104) 0:04:43.503 **** 2026-02-04 00:52:33.422932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-04 00:52:33.422939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-04 00:52:33.422947 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.422951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-04 00:52:33.422959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-04 00:52:33.422963 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.422968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-04 00:52:33.422975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-04 00:52:33.422979 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.422984 | orchestrator | 2026-02-04 00:52:33.422991 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-04 00:52:33.422995 | orchestrator | Wednesday 04 February 2026 00:51:18 +0000 (0:00:00.555) 0:04:44.059 **** 2026-02-04 00:52:33.423002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-04 00:52:33.423007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-04 00:52:33.423011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-04 00:52:33.423016 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.423020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-02-04 00:52:33.423024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-04 00:52:33.423029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-04 00:52:33.423033 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:52:33.423037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-02-04 00:52:33.423042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-04 00:52:33.423046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-02-04 00:52:33.423052 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:52:33.423058 | orchestrator |
2026-02-04 00:52:33.423065 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-02-04 00:52:33.423071 | orchestrator | Wednesday 04 February 2026 00:51:19 +0000 (0:00:00.822) 0:04:44.882 ****
2026-02-04 00:52:33.423082 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:52:33.423090 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:52:33.423097 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:52:33.423103 | orchestrator |
2026-02-04 00:52:33.423110 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-02-04 00:52:33.423116 | orchestrator | Wednesday 04 February 2026 00:51:20 +0000 (0:00:00.621) 0:04:45.504 ****
2026-02-04 00:52:33.423123 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:52:33.423129 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:52:33.423134 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:52:33.423140 | orchestrator |
2026-02-04 00:52:33.423146 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-02-04 00:52:33.423152 | orchestrator | Wednesday 04 February 2026 00:51:21 +0000 (0:00:01.109) 0:04:46.613 ****
2026-02-04 00:52:33.423159 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:52:33.423165 | orchestrator |
2026-02-04 00:52:33.423171 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-02-04 00:52:33.423183 | orchestrator | Wednesday 04 February 2026 00:51:22 +0000 (0:00:01.377) 0:04:47.991 ****
2026-02-04 00:52:33.423193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http',
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-04 00:52:33.423207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 00:52:33.423215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-04 00:52:33.423236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 00:52:33.423244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 00:52:33.423261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-04 00:52:33.423281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 00:52:33.423288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 00:52:33.423314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 
00:52:33.423326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 00:52:33.423339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-04 00:52:33.423345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-04 00:52:33.423350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 00:52:33.423389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-04 00:52:33.423398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-04 00:52:33.423403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 00:52:33.423422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 
'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-04 00:52:33.423432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-04 00:52:33.423437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 00:52:33.423450 | orchestrator | 2026-02-04 00:52:33.423454 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-04 00:52:33.423459 | orchestrator | Wednesday 04 February 2026 00:51:26 +0000 (0:00:03.854) 0:04:51.845 **** 2026-02-04 00:52:33.423463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-04 00:52:33.423472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 00:52:33.423479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 00:52:33.423496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-04 00:52:33.423501 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-04 00:52:33.423509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423524 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 00:52:33.423529 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.423533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-04 00:52:33.423538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 00:52:33.423542 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 00:52:33.423561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-04 00:52:33.423617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-04 00:52:33.423623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-04 00:52:33.423635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 00:52:33.423644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 00:52:33.423660 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.423680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 00:52:33.423690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-04 00:52:33.423698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-04 00:52:33.423705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 00:52:33.423716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 00:52:33.423720 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.423725 | orchestrator | 2026-02-04 00:52:33.423729 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-04 00:52:33.423733 | orchestrator | Wednesday 04 February 2026 00:51:27 +0000 (0:00:01.009) 0:04:52.854 **** 2026-02-04 00:52:33.423738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-04 00:52:33.423743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-04 00:52:33.423752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-04 00:52:33.423757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-04 00:52:33.423761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-04 00:52:33.423766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-04 00:52:33.423771 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.423775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-04 00:52:33.423779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-04 00:52:33.423784 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.423788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-04 00:52:33.423792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-04 00:52:33.423799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-04 00:52:33.423804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-04 00:52:33.423808 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.423812 | orchestrator | 2026-02-04 00:52:33.423817 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-04 00:52:33.423821 | orchestrator | Wednesday 04 February 2026 00:51:28 +0000 (0:00:00.897) 0:04:53.752 **** 2026-02-04 00:52:33.423827 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.423832 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.423836 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.423840 | orchestrator | 2026-02-04 00:52:33.423844 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-04 00:52:33.423848 | orchestrator | Wednesday 04 February 2026 00:51:28 +0000 (0:00:00.411) 0:04:54.163 **** 2026-02-04 00:52:33.423856 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.423860 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.423864 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.423868 | orchestrator | 2026-02-04 00:52:33.423873 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-04 00:52:33.423877 | orchestrator | Wednesday 04 February 2026 00:51:29 +0000 (0:00:01.198) 0:04:55.362 **** 2026-02-04 00:52:33.423881 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:52:33.423885 | orchestrator | 2026-02-04 00:52:33.423890 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-04 00:52:33.423894 | orchestrator | Wednesday 04 February 2026 00:51:31 +0000 (0:00:01.591) 0:04:56.954 **** 2026-02-04 
00:52:33.423898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 00:52:33.423903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 00:52:33.423920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 00:52:33.423925 | orchestrator | 2026-02-04 00:52:33.423929 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-04 00:52:33.423933 | orchestrator | Wednesday 04 February 2026 00:51:33 +0000 (0:00:02.327) 0:04:59.281 **** 2026-02-04 00:52:33.423943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-04 00:52:33.423948 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.423952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-04 00:52:33.423957 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.423961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-04 00:52:33.423966 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.423973 | orchestrator | 2026-02-04 00:52:33.423977 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-04 00:52:33.423982 | orchestrator | Wednesday 04 February 2026 00:51:34 +0000 (0:00:00.355) 0:04:59.637 **** 2026-02-04 00:52:33.423986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-04 00:52:33.423990 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.423995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-04 00:52:33.424001 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.424006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-04 00:52:33.424014 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.424018 | orchestrator | 2026-02-04 00:52:33.424023 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-04 00:52:33.424027 | orchestrator | Wednesday 04 
February 2026 00:51:35 +0000 (0:00:00.909) 0:05:00.547 **** 2026-02-04 00:52:33.424031 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.424035 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.424039 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.424044 | orchestrator | 2026-02-04 00:52:33.424048 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-04 00:52:33.424054 | orchestrator | Wednesday 04 February 2026 00:51:35 +0000 (0:00:00.384) 0:05:00.931 **** 2026-02-04 00:52:33.424058 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.424063 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.424067 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.424071 | orchestrator | 2026-02-04 00:52:33.424075 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-04 00:52:33.424080 | orchestrator | Wednesday 04 February 2026 00:51:36 +0000 (0:00:01.095) 0:05:02.026 **** 2026-02-04 00:52:33.424084 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:52:33.424088 | orchestrator | 2026-02-04 00:52:33.424092 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-04 00:52:33.424096 | orchestrator | Wednesday 04 February 2026 00:51:38 +0000 (0:00:01.604) 0:05:03.631 **** 2026-02-04 00:52:33.424101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-04 00:52:33.424105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-04 00:52:33.424110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-04 00:52:33.424123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-04 00:52:33.424129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 
'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-04 00:52:33.424133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-04 00:52:33.424137 | orchestrator | 2026-02-04 00:52:33.424141 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-04 00:52:33.424146 | orchestrator | Wednesday 04 February 2026 00:51:43 +0000 (0:00:05.469) 0:05:09.100 **** 2026-02-04 00:52:33.424150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-04 00:52:33.424162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-04 00:52:33.424167 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.424172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-04 00:52:33.424176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-04 00:52:33.424181 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.424186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-04 00:52:33.424195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-04 00:52:33.424200 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.424204 | orchestrator | 2026-02-04 00:52:33.424208 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-04 
00:52:33.424214 | orchestrator | Wednesday 04 February 2026 00:51:44 +0000 (0:00:00.580) 0:05:09.681 **** 2026-02-04 00:52:33.424219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-04 00:52:33.424223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-04 00:52:33.424228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-04 00:52:33.424232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-04 00:52:33.424236 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.424241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-04 00:52:33.424245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-04 00:52:33.424249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-04 00:52:33.424254 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-04 00:52:33.424258 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.424262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-04 00:52:33.424270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-04 00:52:33.424274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-04 00:52:33.424279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-04 00:52:33.424283 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.424287 | orchestrator | 2026-02-04 00:52:33.424291 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-04 00:52:33.424296 | orchestrator | Wednesday 04 February 2026 00:51:45 +0000 (0:00:01.417) 0:05:11.098 **** 2026-02-04 00:52:33.424300 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.424304 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.424308 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.424312 | orchestrator | 2026-02-04 00:52:33.424316 
| orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-04 00:52:33.424321 | orchestrator | Wednesday 04 February 2026 00:51:46 +0000 (0:00:01.211) 0:05:12.309 **** 2026-02-04 00:52:33.424325 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.424329 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.424333 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.424338 | orchestrator | 2026-02-04 00:52:33.424344 | orchestrator | TASK [include_role : swift] **************************************************** 2026-02-04 00:52:33.424348 | orchestrator | Wednesday 04 February 2026 00:51:48 +0000 (0:00:01.764) 0:05:14.074 **** 2026-02-04 00:52:33.424353 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.424357 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.424507 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.424517 | orchestrator | 2026-02-04 00:52:33.424521 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-04 00:52:33.424526 | orchestrator | Wednesday 04 February 2026 00:51:48 +0000 (0:00:00.276) 0:05:14.350 **** 2026-02-04 00:52:33.424530 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.424534 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.424539 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.424543 | orchestrator | 2026-02-04 00:52:33.424547 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-04 00:52:33.424557 | orchestrator | Wednesday 04 February 2026 00:51:49 +0000 (0:00:00.269) 0:05:14.619 **** 2026-02-04 00:52:33.424561 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.424565 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.424571 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.424577 | orchestrator | 2026-02-04 00:52:33.424584 | 
orchestrator | TASK [include_role : venus] **************************************************** 2026-02-04 00:52:33.424591 | orchestrator | Wednesday 04 February 2026 00:51:49 +0000 (0:00:00.493) 0:05:15.113 **** 2026-02-04 00:52:33.424601 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.424607 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.424614 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.424620 | orchestrator | 2026-02-04 00:52:33.424626 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-04 00:52:33.424633 | orchestrator | Wednesday 04 February 2026 00:51:49 +0000 (0:00:00.291) 0:05:15.405 **** 2026-02-04 00:52:33.424639 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.424645 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.424651 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.424667 | orchestrator | 2026-02-04 00:52:33.424674 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-04 00:52:33.424681 | orchestrator | Wednesday 04 February 2026 00:51:50 +0000 (0:00:00.289) 0:05:15.695 **** 2026-02-04 00:52:33.424688 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.424695 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.424702 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.424709 | orchestrator | 2026-02-04 00:52:33.424715 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-04 00:52:33.424722 | orchestrator | Wednesday 04 February 2026 00:51:50 +0000 (0:00:00.665) 0:05:16.360 **** 2026-02-04 00:52:33.424728 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:52:33.424735 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:52:33.424744 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:52:33.424748 | orchestrator | 2026-02-04 00:52:33.424752 | orchestrator | 
RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-04 00:52:33.424757 | orchestrator | Wednesday 04 February 2026 00:51:51 +0000 (0:00:00.625) 0:05:16.986 **** 2026-02-04 00:52:33.424772 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:52:33.424776 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:52:33.424786 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:52:33.424790 | orchestrator | 2026-02-04 00:52:33.424795 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-04 00:52:33.424799 | orchestrator | Wednesday 04 February 2026 00:51:51 +0000 (0:00:00.300) 0:05:17.287 **** 2026-02-04 00:52:33.424803 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:52:33.424807 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:52:33.424811 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:52:33.424816 | orchestrator | 2026-02-04 00:52:33.424820 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-04 00:52:33.424824 | orchestrator | Wednesday 04 February 2026 00:51:52 +0000 (0:00:00.882) 0:05:18.169 **** 2026-02-04 00:52:33.424829 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:52:33.424833 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:52:33.424844 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:52:33.424849 | orchestrator | 2026-02-04 00:52:33.424853 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-04 00:52:33.424863 | orchestrator | Wednesday 04 February 2026 00:51:53 +0000 (0:00:00.928) 0:05:19.097 **** 2026-02-04 00:52:33.424867 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:52:33.424872 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:52:33.424876 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:52:33.424880 | orchestrator | 2026-02-04 00:52:33.424884 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] 
**************** 2026-02-04 00:52:33.424888 | orchestrator | Wednesday 04 February 2026 00:51:54 +0000 (0:00:00.842) 0:05:19.939 **** 2026-02-04 00:52:33.424893 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.424897 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.424901 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.424905 | orchestrator | 2026-02-04 00:52:33.424910 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-02-04 00:52:33.424914 | orchestrator | Wednesday 04 February 2026 00:52:03 +0000 (0:00:09.402) 0:05:29.342 **** 2026-02-04 00:52:33.424918 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:52:33.424922 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:52:33.425064 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:52:33.425069 | orchestrator | 2026-02-04 00:52:33.425073 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-02-04 00:52:33.425077 | orchestrator | Wednesday 04 February 2026 00:52:04 +0000 (0:00:00.864) 0:05:30.206 **** 2026-02-04 00:52:33.425081 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.425084 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.425088 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.425092 | orchestrator | 2026-02-04 00:52:33.425096 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-02-04 00:52:33.425106 | orchestrator | Wednesday 04 February 2026 00:52:18 +0000 (0:00:13.959) 0:05:44.166 **** 2026-02-04 00:52:33.425110 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:52:33.425114 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:52:33.425118 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:52:33.425122 | orchestrator | 2026-02-04 00:52:33.425126 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-02-04 00:52:33.425133 | 
orchestrator | Wednesday 04 February 2026 00:52:19 +0000 (0:00:01.181) 0:05:45.347 **** 2026-02-04 00:52:33.425137 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:52:33.425141 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:52:33.425145 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:52:33.425149 | orchestrator | 2026-02-04 00:52:33.425153 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-02-04 00:52:33.425157 | orchestrator | Wednesday 04 February 2026 00:52:23 +0000 (0:00:03.930) 0:05:49.278 **** 2026-02-04 00:52:33.425161 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.425164 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.425168 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.425172 | orchestrator | 2026-02-04 00:52:33.425176 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-02-04 00:52:33.425180 | orchestrator | Wednesday 04 February 2026 00:52:24 +0000 (0:00:00.296) 0:05:49.574 **** 2026-02-04 00:52:33.425183 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.425192 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.425197 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.425200 | orchestrator | 2026-02-04 00:52:33.425204 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-02-04 00:52:33.425208 | orchestrator | Wednesday 04 February 2026 00:52:24 +0000 (0:00:00.323) 0:05:49.898 **** 2026-02-04 00:52:33.425212 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.425216 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.425219 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.425223 | orchestrator | 2026-02-04 00:52:33.425227 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-02-04 00:52:33.425231 | 
orchestrator | Wednesday 04 February 2026 00:52:24 +0000 (0:00:00.499) 0:05:50.397 **** 2026-02-04 00:52:33.425235 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.425239 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.425243 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.425246 | orchestrator | 2026-02-04 00:52:33.425250 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-02-04 00:52:33.425254 | orchestrator | Wednesday 04 February 2026 00:52:25 +0000 (0:00:00.305) 0:05:50.703 **** 2026-02-04 00:52:33.425258 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.425262 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.425266 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.425269 | orchestrator | 2026-02-04 00:52:33.425273 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-02-04 00:52:33.425277 | orchestrator | Wednesday 04 February 2026 00:52:25 +0000 (0:00:00.311) 0:05:51.014 **** 2026-02-04 00:52:33.425281 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:52:33.425285 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:52:33.425288 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:52:33.425292 | orchestrator | 2026-02-04 00:52:33.425296 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-02-04 00:52:33.425300 | orchestrator | Wednesday 04 February 2026 00:52:25 +0000 (0:00:00.336) 0:05:51.350 **** 2026-02-04 00:52:33.425304 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:52:33.425308 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:52:33.425312 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:52:33.425315 | orchestrator | 2026-02-04 00:52:33.425319 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-02-04 00:52:33.425323 | orchestrator | 
Wednesday 04 February 2026 00:52:30 +0000 (0:00:04.957) 0:05:56.308 **** 2026-02-04 00:52:33.425330 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:52:33.425334 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:52:33.425338 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:52:33.425341 | orchestrator | 2026-02-04 00:52:33.425345 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:52:33.425350 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-04 00:52:33.425354 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-04 00:52:33.425358 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-02-04 00:52:33.425375 | orchestrator | 2026-02-04 00:52:33.425378 | orchestrator | 2026-02-04 00:52:33.425383 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:52:33.425386 | orchestrator | Wednesday 04 February 2026 00:52:31 +0000 (0:00:00.881) 0:05:57.189 **** 2026-02-04 00:52:33.425390 | orchestrator | =============================================================================== 2026-02-04 00:52:33.425394 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.96s 2026-02-04 00:52:33.425398 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.40s 2026-02-04 00:52:33.425402 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.05s 2026-02-04 00:52:33.425406 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.47s 2026-02-04 00:52:33.425409 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.10s 2026-02-04 00:52:33.425413 | orchestrator | loadbalancer : Wait for haproxy to 
listen on VIP ------------------------ 4.96s 2026-02-04 00:52:33.425417 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.67s 2026-02-04 00:52:33.425421 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.55s 2026-02-04 00:52:33.425425 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.41s 2026-02-04 00:52:33.425429 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.33s 2026-02-04 00:52:33.425433 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.27s 2026-02-04 00:52:33.425436 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.26s 2026-02-04 00:52:33.425443 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.24s 2026-02-04 00:52:33.425447 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.23s 2026-02-04 00:52:33.425450 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 3.97s 2026-02-04 00:52:33.425454 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 3.93s 2026-02-04 00:52:33.425458 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.90s 2026-02-04 00:52:33.425462 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 3.85s 2026-02-04 00:52:33.425466 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.72s 2026-02-04 00:52:33.425470 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.68s 2026-02-04 00:52:33.425476 | orchestrator | 2026-02-04 00:52:33 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:52:33.425480 | orchestrator | 2026-02-04 00:52:33 | INFO  | Wait 1 
second(s) until the next check 2026-02-04 00:52:36.461614 | orchestrator | 2026-02-04 00:52:36 | INFO  | Task d44fc869-3bfe-4430-8148-ba5382e5538a is in state STARTED 2026-02-04 00:52:36.463129 | orchestrator | 2026-02-04 00:52:36 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:52:36.464831 | orchestrator | 2026-02-04 00:52:36 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:52:36.465110 | orchestrator | 2026-02-04 00:52:36 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:52:39.496919 | orchestrator | 2026-02-04 00:52:39 | INFO  | Task d44fc869-3bfe-4430-8148-ba5382e5538a is in state STARTED 2026-02-04 00:52:39.497186 | orchestrator | 2026-02-04 00:52:39 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:52:39.499208 | orchestrator | 2026-02-04 00:52:39 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:52:39.499313 | orchestrator | 2026-02-04 00:52:39 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:52:42.539809 | orchestrator | 2026-02-04 00:52:42 | INFO  | Task d44fc869-3bfe-4430-8148-ba5382e5538a is in state STARTED 2026-02-04 00:52:42.540232 | orchestrator | 2026-02-04 00:52:42 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:52:42.540825 | orchestrator | 2026-02-04 00:52:42 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:52:42.540864 | orchestrator | 2026-02-04 00:52:42 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:52:45.562187 | orchestrator | 2026-02-04 00:52:45 | INFO  | Task d44fc869-3bfe-4430-8148-ba5382e5538a is in state STARTED 2026-02-04 00:52:45.562269 | orchestrator | 2026-02-04 00:52:45 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:52:45.563526 | orchestrator | 2026-02-04 00:52:45 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 
2026-02-04 00:52:45.563602 | orchestrator | 2026-02-04 00:52:45 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:52:48.589362 | orchestrator | 2026-02-04 00:52:48 | INFO  | Task d44fc869-3bfe-4430-8148-ba5382e5538a is in state STARTED 2026-02-04 00:52:48.591620 | orchestrator | 2026-02-04 00:52:48 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:52:48.591948 | orchestrator | 2026-02-04 00:52:48 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:52:48.591970 | orchestrator | 2026-02-04 00:52:48 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:52:51.615473 | orchestrator | 2026-02-04 00:52:51 | INFO  | Task d44fc869-3bfe-4430-8148-ba5382e5538a is in state STARTED 2026-02-04 00:52:51.616889 | orchestrator | 2026-02-04 00:52:51 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:52:51.618318 | orchestrator | 2026-02-04 00:52:51 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:52:51.618421 | orchestrator | 2026-02-04 00:52:51 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:52:54.648798 | orchestrator | 2026-02-04 00:52:54 | INFO  | Task d44fc869-3bfe-4430-8148-ba5382e5538a is in state STARTED 2026-02-04 00:52:54.650250 | orchestrator | 2026-02-04 00:52:54 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:52:54.652026 | orchestrator | 2026-02-04 00:52:54 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:52:54.652417 | orchestrator | 2026-02-04 00:52:54 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:52:57.674695 | orchestrator | 2026-02-04 00:52:57 | INFO  | Task d44fc869-3bfe-4430-8148-ba5382e5538a is in state STARTED 2026-02-04 00:52:57.678532 | orchestrator | 2026-02-04 00:52:57 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:52:57.679091 | orchestrator | 2026-02-04 
00:52:57 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:52:57.679156 | orchestrator | 2026-02-04 00:52:57 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:53:00.712864 | orchestrator | 2026-02-04 00:53:00 | INFO  | Task d44fc869-3bfe-4430-8148-ba5382e5538a is in state STARTED 2026-02-04 00:53:00.714355 | orchestrator | 2026-02-04 00:53:00 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:53:00.717063 | orchestrator | 2026-02-04 00:53:00 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:53:00.717493 | orchestrator | 2026-02-04 00:53:00 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:53:03.757895 | orchestrator | 2026-02-04 00:53:03 | INFO  | Task d44fc869-3bfe-4430-8148-ba5382e5538a is in state STARTED 2026-02-04 00:53:03.757953 | orchestrator | 2026-02-04 00:53:03 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:53:03.759985 | orchestrator | 2026-02-04 00:53:03 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:53:03.760411 | orchestrator | 2026-02-04 00:53:03 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:53:06.809031 | orchestrator | 2026-02-04 00:53:06 | INFO  | Task d44fc869-3bfe-4430-8148-ba5382e5538a is in state STARTED 2026-02-04 00:53:06.810143 | orchestrator | 2026-02-04 00:53:06 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:53:06.812432 | orchestrator | 2026-02-04 00:53:06 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:53:06.812458 | orchestrator | 2026-02-04 00:53:06 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:53:09.856523 | orchestrator | 2026-02-04 00:53:09 | INFO  | Task d44fc869-3bfe-4430-8148-ba5382e5538a is in state STARTED 2026-02-04 00:53:09.858456 | orchestrator | 2026-02-04 00:53:09 | INFO  | Task 
b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:53:09.859061 | orchestrator | 2026-02-04 00:53:09 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:53:09.859092 | orchestrator | 2026-02-04 00:53:09 | INFO  | Wait 1 second(s) until the next check
[identical polling output repeated every ~3 seconds from 00:53:12 to 00:54:53; tasks d44fc869-3bfe-4430-8148-ba5382e5538a, b7959d77-c7c9-498c-8868-021a94de88e1 and 227495c6-aec9-44a3-8e31-96a65f9ed65b remained in state STARTED]
2026-02-04 00:54:53 | INFO  |
Wait 1 second(s) until the next check 2026-02-04 00:54:56.464462 | orchestrator | 2026-02-04 00:54:56 | INFO  | Task d44fc869-3bfe-4430-8148-ba5382e5538a is in state STARTED 2026-02-04 00:54:56.465763 | orchestrator | 2026-02-04 00:54:56 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:54:56.467536 | orchestrator | 2026-02-04 00:54:56 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:54:56.467585 | orchestrator | 2026-02-04 00:54:56 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:54:59.512920 | orchestrator | 2026-02-04 00:54:59.512980 | orchestrator | 2026-02-04 00:54:59 | INFO  | Task d44fc869-3bfe-4430-8148-ba5382e5538a is in state SUCCESS 2026-02-04 00:54:59.513906 | orchestrator | 2026-02-04 00:54:59.513935 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 00:54:59.513942 | orchestrator | 2026-02-04 00:54:59.513947 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 00:54:59.513952 | orchestrator | Wednesday 04 February 2026 00:52:36 +0000 (0:00:00.227) 0:00:00.227 **** 2026-02-04 00:54:59.513957 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:59.513962 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:54:59.513968 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:54:59.513972 | orchestrator | 2026-02-04 00:54:59.513977 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 00:54:59.513982 | orchestrator | Wednesday 04 February 2026 00:52:36 +0000 (0:00:00.260) 0:00:00.487 **** 2026-02-04 00:54:59.513987 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-04 00:54:59.513991 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-04 00:54:59.514076 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-04 00:54:59.514081 | 
orchestrator | 2026-02-04 00:54:59.514087 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-04 00:54:59.514094 | orchestrator | 2026-02-04 00:54:59.514100 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-04 00:54:59.514106 | orchestrator | Wednesday 04 February 2026 00:52:37 +0000 (0:00:00.338) 0:00:00.826 **** 2026-02-04 00:54:59.514113 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:54:59.514120 | orchestrator | 2026-02-04 00:54:59.514126 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-04 00:54:59.514132 | orchestrator | Wednesday 04 February 2026 00:52:37 +0000 (0:00:00.424) 0:00:01.251 **** 2026-02-04 00:54:59.514136 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-04 00:54:59.514195 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-04 00:54:59.514204 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-04 00:54:59.514209 | orchestrator | 2026-02-04 00:54:59.514215 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-04 00:54:59.514221 | orchestrator | Wednesday 04 February 2026 00:52:38 +0000 (0:00:00.670) 0:00:01.921 **** 2026-02-04 00:54:59.514245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 00:54:59.514279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 00:54:59.514299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 00:54:59.514308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 00:54:59.514328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 00:54:59.514337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 00:54:59.514344 | orchestrator | 2026-02-04 00:54:59.514350 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-04 00:54:59.514356 | orchestrator | Wednesday 04 February 2026 00:52:39 +0000 (0:00:01.581) 0:00:03.503 **** 2026-02-04 00:54:59.514363 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 
00:54:59.514369 | orchestrator | 2026-02-04 00:54:59.514375 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-04 00:54:59.514381 | orchestrator | Wednesday 04 February 2026 00:52:40 +0000 (0:00:00.479) 0:00:03.983 **** 2026-02-04 00:54:59.514394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 00:54:59.514399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 00:54:59.514410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 00:54:59.514414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 00:54:59.514423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 00:54:59.514427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 00:54:59.514435 | orchestrator | 2026-02-04 00:54:59.514439 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-04 00:54:59.514443 | orchestrator | Wednesday 04 February 2026 00:52:42 +0000 (0:00:02.667) 0:00:06.650 **** 2026-02-04 00:54:59.514450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-04 00:54:59.514454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-04 00:54:59.514458 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:59.514467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-04 00:54:59.514471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-04 00:54:59.514479 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:59.514483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-04 00:54:59.514490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-04 00:54:59.514494 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:59.514498 | orchestrator | 2026-02-04 00:54:59.514502 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-04 00:54:59.514506 | orchestrator | Wednesday 04 February 2026 00:52:43 +0000 (0:00:01.000) 0:00:07.651 **** 2026-02-04 00:54:59.514513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-04 00:54:59.514521 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-04 00:54:59.514525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-04 00:54:59.514529 | 
orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:59.514536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-04 00:54:59.514540 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:59.514547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-04 00:54:59.514554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-04 00:54:59.514559 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:59.514562 | orchestrator | 2026-02-04 00:54:59.514566 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-04 00:54:59.514570 | orchestrator | Wednesday 04 February 2026 00:52:44 +0000 (0:00:00.968) 0:00:08.620 **** 2026-02-04 00:54:59.514574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 00:54:59.514581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 00:54:59.514585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 00:54:59.514598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 00:54:59.514603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 00:54:59.514610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 00:54:59.514614 | orchestrator | 2026-02-04 00:54:59.514618 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-04 00:54:59.514622 | orchestrator | Wednesday 04 February 2026 00:52:47 +0000 (0:00:02.390) 0:00:11.011 **** 2026-02-04 00:54:59.514626 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:59.514630 | orchestrator | changed: [testbed-node-1] 2026-02-04 
00:54:59.514634 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:59.514638 | orchestrator | 2026-02-04 00:54:59.514641 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-04 00:54:59.514645 | orchestrator | Wednesday 04 February 2026 00:52:50 +0000 (0:00:02.814) 0:00:13.825 **** 2026-02-04 00:54:59.514653 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:59.514657 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:54:59.514661 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:59.514665 | orchestrator | 2026-02-04 00:54:59.514668 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-02-04 00:54:59.514672 | orchestrator | Wednesday 04 February 2026 00:52:52 +0000 (0:00:01.984) 0:00:15.810 **** 2026-02-04 00:54:59.514759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 00:54:59.514767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 00:54:59.514773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 00:54:59.514784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 00:54:59.514800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 00:54:59.514807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 00:54:59.514814 | orchestrator | 2026-02-04 00:54:59.514820 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-04 00:54:59.514825 | orchestrator | Wednesday 04 February 2026 00:52:54 +0000 (0:00:02.258) 0:00:18.068 **** 2026-02-04 00:54:59.514831 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:59.514837 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:54:59.514843 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:54:59.514849 | orchestrator | 2026-02-04 00:54:59.514855 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-04 00:54:59.514860 | orchestrator | Wednesday 04 February 2026 00:52:54 +0000 (0:00:00.258) 0:00:18.327 **** 2026-02-04 00:54:59.514866 | orchestrator | 2026-02-04 00:54:59.514872 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-04 00:54:59.514878 | orchestrator | Wednesday 04 February 2026 00:52:54 +0000 (0:00:00.058) 0:00:18.386 **** 2026-02-04 00:54:59.514884 | orchestrator | 
2026-02-04 00:54:59.514889 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-04 00:54:59.514895 | orchestrator | Wednesday 04 February 2026 00:52:54 +0000 (0:00:00.058) 0:00:18.444 **** 2026-02-04 00:54:59.514902 | orchestrator | 2026-02-04 00:54:59.514907 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-04 00:54:59.514917 | orchestrator | Wednesday 04 February 2026 00:52:54 +0000 (0:00:00.060) 0:00:18.505 **** 2026-02-04 00:54:59.514923 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:59.514928 | orchestrator | 2026-02-04 00:54:59.514935 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-04 00:54:59.514941 | orchestrator | Wednesday 04 February 2026 00:52:54 +0000 (0:00:00.190) 0:00:18.696 **** 2026-02-04 00:54:59.514946 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:54:59.514958 | orchestrator | 2026-02-04 00:54:59.514964 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-04 00:54:59.514970 | orchestrator | Wednesday 04 February 2026 00:52:55 +0000 (0:00:00.446) 0:00:19.142 **** 2026-02-04 00:54:59.514976 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:59.514982 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:59.514990 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:54:59.514994 | orchestrator | 2026-02-04 00:54:59.514998 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-04 00:54:59.515002 | orchestrator | Wednesday 04 February 2026 00:53:44 +0000 (0:00:49.089) 0:01:08.232 **** 2026-02-04 00:54:59.515006 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:59.515010 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:54:59.515014 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:54:59.515017 | orchestrator | 
2026-02-04 00:54:59.515021 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-04 00:54:59.515025 | orchestrator | Wednesday 04 February 2026 00:54:45 +0000 (0:01:00.721) 0:02:08.953 **** 2026-02-04 00:54:59.515029 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:54:59.515034 | orchestrator | 2026-02-04 00:54:59.515040 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-04 00:54:59.515046 | orchestrator | Wednesday 04 February 2026 00:54:45 +0000 (0:00:00.649) 0:02:09.603 **** 2026-02-04 00:54:59.515052 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:59.515059 | orchestrator | 2026-02-04 00:54:59.515065 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-04 00:54:59.515070 | orchestrator | Wednesday 04 February 2026 00:54:48 +0000 (0:00:02.549) 0:02:12.152 **** 2026-02-04 00:54:59.515077 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:54:59.515083 | orchestrator | 2026-02-04 00:54:59.515089 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-04 00:54:59.515096 | orchestrator | Wednesday 04 February 2026 00:54:50 +0000 (0:00:02.364) 0:02:14.517 **** 2026-02-04 00:54:59.515102 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:59.515108 | orchestrator | 2026-02-04 00:54:59.515114 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-04 00:54:59.515120 | orchestrator | Wednesday 04 February 2026 00:54:53 +0000 (0:00:02.813) 0:02:17.331 **** 2026-02-04 00:54:59.515124 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:54:59.515128 | orchestrator | 2026-02-04 00:54:59.515135 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 
00:54:59.515141 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 00:54:59.515188 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-04 00:54:59.515193 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-04 00:54:59.515197 | orchestrator | 2026-02-04 00:54:59.515201 | orchestrator | 2026-02-04 00:54:59.515205 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:54:59.515208 | orchestrator | Wednesday 04 February 2026 00:54:56 +0000 (0:00:02.571) 0:02:19.902 **** 2026-02-04 00:54:59.515212 | orchestrator | =============================================================================== 2026-02-04 00:54:59.515216 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 60.72s 2026-02-04 00:54:59.515220 | orchestrator | opensearch : Restart opensearch container ------------------------------ 49.09s 2026-02-04 00:54:59.515224 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.81s 2026-02-04 00:54:59.515227 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.81s 2026-02-04 00:54:59.515236 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.67s 2026-02-04 00:54:59.515240 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.57s 2026-02-04 00:54:59.515244 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.55s 2026-02-04 00:54:59.515248 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.39s 2026-02-04 00:54:59.515251 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.36s 2026-02-04 00:54:59.515255 | 
orchestrator | opensearch : Check opensearch containers -------------------------------- 2.26s 2026-02-04 00:54:59.515259 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.98s 2026-02-04 00:54:59.515263 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.58s 2026-02-04 00:54:59.515267 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.00s 2026-02-04 00:54:59.515270 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.97s 2026-02-04 00:54:59.515276 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.67s 2026-02-04 00:54:59.515283 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.65s 2026-02-04 00:54:59.515289 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.48s 2026-02-04 00:54:59.515299 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.45s 2026-02-04 00:54:59.515305 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.42s 2026-02-04 00:54:59.515311 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.34s 2026-02-04 00:54:59.515688 | orchestrator | 2026-02-04 00:54:59 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:54:59.516861 | orchestrator | 2026-02-04 00:54:59 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:54:59.516889 | orchestrator | 2026-02-04 00:54:59 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:55:02.562956 | orchestrator | 2026-02-04 00:55:02 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:55:02.564309 | orchestrator | 2026-02-04 00:55:02 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state 
STARTED 2026-02-04 00:55:02.564547 | orchestrator | 2026-02-04 00:55:02 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:55:05.598952 | orchestrator | 2026-02-04 00:55:05 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:55:05.600050 | orchestrator | 2026-02-04 00:55:05 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:55:05.600283 | orchestrator | 2026-02-04 00:55:05 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:55:08.643519 | orchestrator | 2026-02-04 00:55:08 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:55:08.644359 | orchestrator | 2026-02-04 00:55:08 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:55:08.644396 | orchestrator | 2026-02-04 00:55:08 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:55:11.678800 | orchestrator | 2026-02-04 00:55:11 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:55:11.679774 | orchestrator | 2026-02-04 00:55:11 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state STARTED 2026-02-04 00:55:11.680699 | orchestrator | 2026-02-04 00:55:11 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:55:14.728653 | orchestrator | 2026-02-04 00:55:14 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:55:14.729277 | orchestrator | 2026-02-04 00:55:14 | INFO  | Task 979e1b8e-764b-47a3-bde3-86a99b656921 is in state STARTED 2026-02-04 00:55:14.734266 | orchestrator | 2026-02-04 00:55:14 | INFO  | Task 227495c6-aec9-44a3-8e31-96a65f9ed65b is in state SUCCESS 2026-02-04 00:55:14.736501 | orchestrator | 2026-02-04 00:55:14.736552 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-04 00:55:14.736567 | orchestrator | 2.16.14 2026-02-04 00:55:14.736575 | orchestrator | 2026-02-04 00:55:14.736583 | orchestrator | PLAY [Prepare 
deployment of Ceph services] ************************************* 2026-02-04 00:55:14.736591 | orchestrator | 2026-02-04 00:55:14.736598 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-04 00:55:14.736605 | orchestrator | Wednesday 04 February 2026 00:44:28 +0000 (0:00:00.735) 0:00:00.735 **** 2026-02-04 00:55:14.736613 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:55:14.736621 | orchestrator | 2026-02-04 00:55:14.736628 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-04 00:55:14.736635 | orchestrator | Wednesday 04 February 2026 00:44:29 +0000 (0:00:01.059) 0:00:01.795 **** 2026-02-04 00:55:14.736641 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.736648 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.736655 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.736661 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.736668 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.736675 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.736681 | orchestrator | 2026-02-04 00:55:14.736688 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-04 00:55:14.736695 | orchestrator | Wednesday 04 February 2026 00:44:31 +0000 (0:00:01.466) 0:00:03.261 **** 2026-02-04 00:55:14.736701 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.736708 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.736715 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.736721 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.736728 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.736735 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.736741 | orchestrator | 2026-02-04 00:55:14.736748 | orchestrator | TASK 
[ceph-facts : Check if podman binary is present] ************************** 2026-02-04 00:55:14.736832 | orchestrator | Wednesday 04 February 2026 00:44:32 +0000 (0:00:00.853) 0:00:04.115 **** 2026-02-04 00:55:14.736839 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.736846 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.736852 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.736858 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.736863 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.736869 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.736875 | orchestrator | 2026-02-04 00:55:14.736902 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-04 00:55:14.736909 | orchestrator | Wednesday 04 February 2026 00:44:33 +0000 (0:00:00.895) 0:00:05.010 **** 2026-02-04 00:55:14.736915 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.736921 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.736927 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.736948 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.736955 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.736961 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.737026 | orchestrator | 2026-02-04 00:55:14.737036 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-04 00:55:14.737044 | orchestrator | Wednesday 04 February 2026 00:44:33 +0000 (0:00:00.803) 0:00:05.813 **** 2026-02-04 00:55:14.737050 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.737056 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.737061 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.737068 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.737074 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.737080 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.737105 | orchestrator | 2026-02-04 
00:55:14.737111 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-04 00:55:14.737117 | orchestrator | Wednesday 04 February 2026 00:44:34 +0000 (0:00:00.464) 0:00:06.278 **** 2026-02-04 00:55:14.737147 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.737153 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.737159 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.737164 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.737171 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.737176 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.737183 | orchestrator | 2026-02-04 00:55:14.737189 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-04 00:55:14.737196 | orchestrator | Wednesday 04 February 2026 00:44:35 +0000 (0:00:01.063) 0:00:07.341 **** 2026-02-04 00:55:14.737201 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.737208 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.737214 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.737220 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.737225 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.737239 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.737247 | orchestrator | 2026-02-04 00:55:14.737253 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-04 00:55:14.737261 | orchestrator | Wednesday 04 February 2026 00:44:36 +0000 (0:00:01.082) 0:00:08.424 **** 2026-02-04 00:55:14.737268 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.737275 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.737282 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.737288 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.737295 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.737302 | orchestrator | ok: 
[testbed-node-2] 2026-02-04 00:55:14.737308 | orchestrator | 2026-02-04 00:55:14.737315 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-04 00:55:14.737321 | orchestrator | Wednesday 04 February 2026 00:44:37 +0000 (0:00:01.033) 0:00:09.457 **** 2026-02-04 00:55:14.737327 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-04 00:55:14.737334 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 00:55:14.737340 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 00:55:14.737345 | orchestrator | 2026-02-04 00:55:14.737351 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-04 00:55:14.737358 | orchestrator | Wednesday 04 February 2026 00:44:38 +0000 (0:00:00.677) 0:00:10.135 **** 2026-02-04 00:55:14.737364 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.737370 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.737376 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.737396 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.737402 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.737409 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.737415 | orchestrator | 2026-02-04 00:55:14.737421 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-04 00:55:14.737428 | orchestrator | Wednesday 04 February 2026 00:44:39 +0000 (0:00:01.490) 0:00:11.626 **** 2026-02-04 00:55:14.737434 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-04 00:55:14.737441 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 00:55:14.737474 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-02-04 00:55:14.737482 | orchestrator | 2026-02-04 00:55:14.737488 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-04 00:55:14.737494 | orchestrator | Wednesday 04 February 2026 00:44:43 +0000 (0:00:03.433) 0:00:15.059 **** 2026-02-04 00:55:14.737501 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-04 00:55:14.737508 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-04 00:55:14.737590 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-04 00:55:14.737624 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.737631 | orchestrator | 2026-02-04 00:55:14.737637 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-04 00:55:14.737644 | orchestrator | Wednesday 04 February 2026 00:44:44 +0000 (0:00:00.859) 0:00:15.919 **** 2026-02-04 00:55:14.737707 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.737717 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.737724 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.737731 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.737738 | orchestrator | 2026-02-04 00:55:14.737751 | orchestrator | TASK [ceph-facts : Set_fact running_mon - 
non_container] *********************** 2026-02-04 00:55:14.737768 | orchestrator | Wednesday 04 February 2026 00:44:44 +0000 (0:00:00.830) 0:00:16.749 **** 2026-02-04 00:55:14.737784 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.737793 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.737799 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.737806 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.737811 | orchestrator | 2026-02-04 00:55:14.737817 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-04 00:55:14.737823 | orchestrator | Wednesday 04 February 2026 00:44:45 +0000 (0:00:00.510) 0:00:17.260 **** 2026-02-04 00:55:14.737838 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 
'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-04 00:44:40.441283', 'end': '2026-02-04 00:44:40.803956', 'delta': '0:00:00.362673', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.737848 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-04 00:44:41.827386', 'end': '2026-02-04 00:44:42.125593', 'delta': '0:00:00.298207', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.737879 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-04 00:44:42.835324', 'end': '2026-02-04 00:44:43.103306', 'delta': '0:00:00.267982', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 
'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.737885 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.737891 | orchestrator | 2026-02-04 00:55:14.737897 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-04 00:55:14.737903 | orchestrator | Wednesday 04 February 2026 00:44:45 +0000 (0:00:00.165) 0:00:17.425 **** 2026-02-04 00:55:14.737909 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.737915 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.737925 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.737932 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.737938 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.737944 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.737951 | orchestrator | 2026-02-04 00:55:14.737957 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-04 00:55:14.737964 | orchestrator | Wednesday 04 February 2026 00:44:47 +0000 (0:00:01.637) 0:00:19.062 **** 2026-02-04 00:55:14.737970 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-04 00:55:14.737976 | orchestrator | 2026-02-04 00:55:14.737983 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-04 00:55:14.737989 | orchestrator | Wednesday 04 February 2026 00:44:47 +0000 (0:00:00.686) 0:00:19.749 **** 2026-02-04 00:55:14.737996 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.738002 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.738008 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.738142 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.738150 | orchestrator | skipping: [testbed-node-1] 2026-02-04 
00:55:14.738157 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.738164 | orchestrator | 2026-02-04 00:55:14.738171 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-04 00:55:14.738178 | orchestrator | Wednesday 04 February 2026 00:44:49 +0000 (0:00:01.184) 0:00:20.934 **** 2026-02-04 00:55:14.738185 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.738192 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.738199 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.738207 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.738214 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.738221 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.738227 | orchestrator | 2026-02-04 00:55:14.738234 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-04 00:55:14.738242 | orchestrator | Wednesday 04 February 2026 00:44:51 +0000 (0:00:01.999) 0:00:22.933 **** 2026-02-04 00:55:14.738249 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.738314 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.738321 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.738328 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.738335 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.738341 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.738348 | orchestrator | 2026-02-04 00:55:14.738354 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-04 00:55:14.738361 | orchestrator | Wednesday 04 February 2026 00:44:51 +0000 (0:00:00.827) 0:00:23.760 **** 2026-02-04 00:55:14.738368 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.738375 | orchestrator | 2026-02-04 00:55:14.738382 | orchestrator | TASK [ceph-facts : Generate cluster fsid] 
************************************** 2026-02-04 00:55:14.738390 | orchestrator | Wednesday 04 February 2026 00:44:52 +0000 (0:00:00.080) 0:00:23.841 **** 2026-02-04 00:55:14.738397 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.738404 | orchestrator | 2026-02-04 00:55:14.738411 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-04 00:55:14.738418 | orchestrator | Wednesday 04 February 2026 00:44:52 +0000 (0:00:00.230) 0:00:24.072 **** 2026-02-04 00:55:14.738425 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.738432 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.738439 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.738453 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.738460 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.738467 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.738474 | orchestrator | 2026-02-04 00:55:14.738481 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-04 00:55:14.738489 | orchestrator | Wednesday 04 February 2026 00:44:52 +0000 (0:00:00.548) 0:00:24.620 **** 2026-02-04 00:55:14.738495 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.738503 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.738510 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.738517 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.738524 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.738531 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.738538 | orchestrator | 2026-02-04 00:55:14.738545 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-04 00:55:14.738552 | orchestrator | Wednesday 04 February 2026 00:44:53 +0000 (0:00:00.734) 0:00:25.354 **** 2026-02-04 00:55:14.738559 | orchestrator | skipping: 
[testbed-node-3] 2026-02-04 00:55:14.738566 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.738573 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.738580 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.738587 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.738624 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.738629 | orchestrator | 2026-02-04 00:55:14.738635 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-04 00:55:14.738641 | orchestrator | Wednesday 04 February 2026 00:44:54 +0000 (0:00:00.603) 0:00:25.958 **** 2026-02-04 00:55:14.738647 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.738653 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.738659 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.738665 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.738671 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.738677 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.738682 | orchestrator | 2026-02-04 00:55:14.738688 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-04 00:55:14.738694 | orchestrator | Wednesday 04 February 2026 00:44:55 +0000 (0:00:01.020) 0:00:26.979 **** 2026-02-04 00:55:14.738700 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.738707 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.738714 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.738720 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.738732 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.738766 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.738773 | orchestrator | 2026-02-04 00:55:14.738780 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-04 00:55:14.738787 | 
orchestrator | Wednesday 04 February 2026 00:44:55 +0000 (0:00:00.533) 0:00:27.512 **** 2026-02-04 00:55:14.738794 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.738805 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.738812 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.738873 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.738881 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.738888 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.738894 | orchestrator | 2026-02-04 00:55:14.738902 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-04 00:55:14.738908 | orchestrator | Wednesday 04 February 2026 00:44:56 +0000 (0:00:00.668) 0:00:28.180 **** 2026-02-04 00:55:14.738915 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.738922 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.738929 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.738936 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.738943 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.738949 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.738956 | orchestrator | 2026-02-04 00:55:14.738963 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-04 00:55:14.738970 | orchestrator | Wednesday 04 February 2026 00:44:56 +0000 (0:00:00.487) 0:00:28.668 **** 2026-02-04 00:55:14.738978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--29c6bc8c--f904--55ca--809f--6429b65a49e4-osd--block--29c6bc8c--f904--55ca--809f--6429b65a49e4', 'dm-uuid-LVM-futtSpiu2Dc6zeEwlRIqGKxk2240GEq2NDItB2Yekp0j5JSGwBE6yhTovNBjHOIV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.738989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1b7fb365--e96c--53e1--a018--1a0a8a845031-osd--block--1b7fb365--e96c--53e1--a018--1a0a8a845031', 'dm-uuid-LVM-auzvDlBNDf4L39V45seqETFBTe0hlfpeBqlD6kkCsuNwd2IcE42BuaoOpC4zPjAE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739019 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739031 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739039 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6fbd78c3--b583--5fde--80ba--0c2cdf325dc7-osd--block--6fbd78c3--b583--5fde--80ba--0c2cdf325dc7', 'dm-uuid-LVM-NTd3wVqFaLZs0HHLMiyjtJ62L05RnYUwQ92nicsvk9XmhXeB6EY8l1ES0A9vlzPg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739065 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739072 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739088 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016', 'scsi-SQEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part1', 'scsi-SQEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part14', 'scsi-SQEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part15', 'scsi-SQEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part16', 'scsi-SQEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:55:14.739105 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c6467dc2--49cb--511a--ae45--cb6bd8ce65cd-osd--block--c6467dc2--49cb--511a--ae45--cb6bd8ce65cd', 'dm-uuid-LVM-7ccqDb2IMlGvbROgddrBNTB0o1Up1e8jKjBNkdmhEIPN7p0IyTvtaslg9ZIPMZAL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739114 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--29c6bc8c--f904--55ca--809f--6429b65a49e4-osd--block--29c6bc8c--f904--55ca--809f--6429b65a49e4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5yak5L-o4at-xQ2L-P6UC-hZvx-2Sm1-YoLKVV', 'scsi-0QEMU_QEMU_HARDDISK_1679d905-c182-4dcb-a16f-ff388fb87fa8', 'scsi-SQEMU_QEMU_HARDDISK_1679d905-c182-4dcb-a16f-ff388fb87fa8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:55:14.739156 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739184 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1b7fb365--e96c--53e1--a018--1a0a8a845031-osd--block--1b7fb365--e96c--53e1--a018--1a0a8a845031'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-J1Zsmh-e107-W6nI-zKJc-WW2R-CulX-Lhjb6v', 'scsi-0QEMU_QEMU_HARDDISK_6b00b999-8e8e-4579-a93c-a7b8030012f4', 'scsi-SQEMU_QEMU_HARDDISK_6b00b999-8e8e-4579-a93c-a7b8030012f4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:55:14.739196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f279b9c8-b4a1-41c6-b00f-bd5a2c0b4c81', 'scsi-SQEMU_QEMU_HARDDISK_f279b9c8-b4a1-41c6-b00f-bd5a2c0b4c81'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:55:14.739212 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739252 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:55:14.739260 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-04 00:55:14.739266 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739278 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739347 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004', 'scsi-SQEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part1', 'scsi-SQEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part14', 'scsi-SQEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part15', 'scsi-SQEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part16', 'scsi-SQEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:55:14.739373 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6fbd78c3--b583--5fde--80ba--0c2cdf325dc7-osd--block--6fbd78c3--b583--5fde--80ba--0c2cdf325dc7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FO9Ylb-oyya-bgzx-QKlN-HkEC-gQ2h-HRhlzY', 'scsi-0QEMU_QEMU_HARDDISK_b014772c-38b5-4caa-9603-223bc8ef3a74', 'scsi-SQEMU_QEMU_HARDDISK_b014772c-38b5-4caa-9603-223bc8ef3a74'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:55:14.739382 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c6467dc2--49cb--511a--ae45--cb6bd8ce65cd-osd--block--c6467dc2--49cb--511a--ae45--cb6bd8ce65cd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Bq1yaU-lh82-MUro-hneI-alZs-sfZu-Db2wDT', 'scsi-0QEMU_QEMU_HARDDISK_70272979-0540-4b40-8ef0-41f73c6a4a5a', 'scsi-SQEMU_QEMU_HARDDISK_70272979-0540-4b40-8ef0-41f73c6a4a5a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:55:14.739415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b592fbb-955b-4fdf-b12f-717d86698fde', 'scsi-SQEMU_QEMU_HARDDISK_5b592fbb-955b-4fdf-b12f-717d86698fde'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:55:14.739429 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:55:14.739437 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--81b3d681--fa24--5b92--b5b8--11e84f5b22d9-osd--block--81b3d681--fa24--5b92--b5b8--11e84f5b22d9', 'dm-uuid-LVM-6y4w4ArVi5D1tyooWsj9aIJCekc2S7nLhYC0RCkddpwlSQuyk6aIosyXEoEqPeQY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5659fb6c--b6d6--5368--9f3c--0e525a1333df-osd--block--5659fb6c--b6d6--5368--9f3c--0e525a1333df', 
'dm-uuid-LVM-3afIFmYJiFa9RqNchm5P6Eeh4oUATUr9E8CsbzdSyxtBuOKMUchzqSt7IIBMTCOu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739477 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739492 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739506 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739524 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3', 'scsi-SQEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part1', 'scsi-SQEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part14', 'scsi-SQEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part15', 'scsi-SQEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part16', 'scsi-SQEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:55:14.739532 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.739545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--81b3d681--fa24--5b92--b5b8--11e84f5b22d9-osd--block--81b3d681--fa24--5b92--b5b8--11e84f5b22d9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xqFisJ-bzmJ-mbhN-Vi30-8HQb-PT81-9BRMzc', 'scsi-0QEMU_QEMU_HARDDISK_330cb526-2149-4826-b513-02c8e88ca89e', 'scsi-SQEMU_QEMU_HARDDISK_330cb526-2149-4826-b513-02c8e88ca89e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:55:14.739557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5659fb6c--b6d6--5368--9f3c--0e525a1333df-osd--block--5659fb6c--b6d6--5368--9f3c--0e525a1333df'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-niF0yj-rLoA-623U-KMw1-I2na-LHzi-DZgykD', 'scsi-0QEMU_QEMU_HARDDISK_e6547550-6f0e-4316-b715-af657c75c64a', 'scsi-SQEMU_QEMU_HARDDISK_e6547550-6f0e-4316-b715-af657c75c64a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:55:14.739564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b2cce40-d718-4f99-a243-3b703c717e59', 'scsi-SQEMU_QEMU_HARDDISK_6b2cce40-d718-4f99-a243-3b703c717e59'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:55:14.739575 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:55:14.739583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f34b523-bd6d-4929-b1b4-04af8dcf542b', 'scsi-SQEMU_QEMU_HARDDISK_6f34b523-bd6d-4929-b1b4-04af8dcf542b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f34b523-bd6d-4929-b1b4-04af8dcf542b-part1', 'scsi-SQEMU_QEMU_HARDDISK_6f34b523-bd6d-4929-b1b4-04af8dcf542b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f34b523-bd6d-4929-b1b4-04af8dcf542b-part14', 'scsi-SQEMU_QEMU_HARDDISK_6f34b523-bd6d-4929-b1b4-04af8dcf542b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f34b523-bd6d-4929-b1b4-04af8dcf542b-part15', 'scsi-SQEMU_QEMU_HARDDISK_6f34b523-bd6d-4929-b1b4-04af8dcf542b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f34b523-bd6d-4929-b1b4-04af8dcf542b-part16', 'scsi-SQEMU_QEMU_HARDDISK_6f34b523-bd6d-4929-b1b4-04af8dcf542b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:55:14.739770 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-02-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:55:14.739776 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.739783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-02-04 00:55:14.739805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739823 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.739829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:55:14.739844 | orchestrator | skipping: [testbed-node-0] 2026-02-04 
00:55:14.739850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 00:55:14.739860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 00:55:14.739871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80f63be7-f780-4338-9bbf-82469273ebcd', 'scsi-SQEMU_QEMU_HARDDISK_80f63be7-f780-4338-9bbf-82469273ebcd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80f63be7-f780-4338-9bbf-82469273ebcd-part1', 'scsi-SQEMU_QEMU_HARDDISK_80f63be7-f780-4338-9bbf-82469273ebcd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80f63be7-f780-4338-9bbf-82469273ebcd-part14', 'scsi-SQEMU_QEMU_HARDDISK_80f63be7-f780-4338-9bbf-82469273ebcd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80f63be7-f780-4338-9bbf-82469273ebcd-part15', 'scsi-SQEMU_QEMU_HARDDISK_80f63be7-f780-4338-9bbf-82469273ebcd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80f63be7-f780-4338-9bbf-82469273ebcd-part16', 'scsi-SQEMU_QEMU_HARDDISK_80f63be7-f780-4338-9bbf-82469273ebcd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-04 00:55:14.739879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 00:55:14.739885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-04 00:55:14.739899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 00:55:14.739906 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.739912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 00:55:14.739923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 00:55:14.739929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 00:55:14.739936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 00:55:14.739954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 00:55:14.739966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a77de84b-d98a-4a0a-b405-24b611969fa7', 'scsi-SQEMU_QEMU_HARDDISK_a77de84b-d98a-4a0a-b405-24b611969fa7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a77de84b-d98a-4a0a-b405-24b611969fa7-part1', 'scsi-SQEMU_QEMU_HARDDISK_a77de84b-d98a-4a0a-b405-24b611969fa7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a77de84b-d98a-4a0a-b405-24b611969fa7-part14', 'scsi-SQEMU_QEMU_HARDDISK_a77de84b-d98a-4a0a-b405-24b611969fa7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a77de84b-d98a-4a0a-b405-24b611969fa7-part15', 'scsi-SQEMU_QEMU_HARDDISK_a77de84b-d98a-4a0a-b405-24b611969fa7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a77de84b-d98a-4a0a-b405-24b611969fa7-part16', 'scsi-SQEMU_QEMU_HARDDISK_a77de84b-d98a-4a0a-b405-24b611969fa7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-04 00:55:14.739984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-04 00:55:14.739992 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.739999 | orchestrator |
2026-02-04 00:55:14.740006 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-04 00:55:14.740036 | orchestrator | Wednesday 04 February 2026 00:44:58 +0000 (0:00:01.712) 0:00:30.380 ****
2026-02-04 00:55:14.740045 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--29c6bc8c--f904--55ca--809f--6429b65a49e4-osd--block--29c6bc8c--f904--55ca--809f--6429b65a49e4', 'dm-uuid-LVM-futtSpiu2Dc6zeEwlRIqGKxk2240GEq2NDItB2Yekp0j5JSGwBE6yhTovNBjHOIV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740057 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1b7fb365--e96c--53e1--a018--1a0a8a845031-osd--block--1b7fb365--e96c--53e1--a018--1a0a8a845031', 'dm-uuid-LVM-auzvDlBNDf4L39V45seqETFBTe0hlfpeBqlD6kkCsuNwd2IcE42BuaoOpC4zPjAE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740065 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740078 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740114 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740173 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740182 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740189 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740200 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740208 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6fbd78c3--b583--5fde--80ba--0c2cdf325dc7-osd--block--6fbd78c3--b583--5fde--80ba--0c2cdf325dc7', 'dm-uuid-LVM-NTd3wVqFaLZs0HHLMiyjtJ62L05RnYUwQ92nicsvk9XmhXeB6EY8l1ES0A9vlzPg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740220 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740233 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c6467dc2--49cb--511a--ae45--cb6bd8ce65cd-osd--block--c6467dc2--49cb--511a--ae45--cb6bd8ce65cd', 'dm-uuid-LVM-7ccqDb2IMlGvbROgddrBNTB0o1Up1e8jKjBNkdmhEIPN7p0IyTvtaslg9ZIPMZAL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740244 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016', 'scsi-SQEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part1', 'scsi-SQEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part14', 'scsi-SQEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part15', 'scsi-SQEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part16', 'scsi-SQEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740259 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740270 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--29c6bc8c--f904--55ca--809f--6429b65a49e4-osd--block--29c6bc8c--f904--55ca--809f--6429b65a49e4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5yak5L-o4at-xQ2L-P6UC-hZvx-2Sm1-YoLKVV', 'scsi-0QEMU_QEMU_HARDDISK_1679d905-c182-4dcb-a16f-ff388fb87fa8', 'scsi-SQEMU_QEMU_HARDDISK_1679d905-c182-4dcb-a16f-ff388fb87fa8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740279 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740286 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1b7fb365--e96c--53e1--a018--1a0a8a845031-osd--block--1b7fb365--e96c--53e1--a018--1a0a8a845031'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-J1Zsmh-e107-W6nI-zKJc-WW2R-CulX-Lhjb6v', 'scsi-0QEMU_QEMU_HARDDISK_6b00b999-8e8e-4579-a93c-a7b8030012f4', 'scsi-SQEMU_QEMU_HARDDISK_6b00b999-8e8e-4579-a93c-a7b8030012f4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740297 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740400 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f279b9c8-b4a1-41c6-b00f-bd5a2c0b4c81', 'scsi-SQEMU_QEMU_HARDDISK_f279b9c8-b4a1-41c6-b00f-bd5a2c0b4c81'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740409 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740422 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740430 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740437 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740471 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740484 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--81b3d681--fa24--5b92--b5b8--11e84f5b22d9-osd--block--81b3d681--fa24--5b92--b5b8--11e84f5b22d9', 'dm-uuid-LVM-6y4w4ArVi5D1tyooWsj9aIJCekc2S7nLhYC0RCkddpwlSQuyk6aIosyXEoEqPeQY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.740491 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.741040 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5659fb6c--b6d6--5368--9f3c--0e525a1333df-osd--block--5659fb6c--b6d6--5368--9f3c--0e525a1333df', 'dm-uuid-LVM-3afIFmYJiFa9RqNchm5P6Eeh4oUATUr9E8CsbzdSyxtBuOKMUchzqSt7IIBMTCOu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.741066 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004', 'scsi-SQEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part1', 'scsi-SQEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part14', 'scsi-SQEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part15', 'scsi-SQEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part16', 'scsi-SQEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.741081 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6fbd78c3--b583--5fde--80ba--0c2cdf325dc7-osd--block--6fbd78c3--b583--5fde--80ba--0c2cdf325dc7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FO9Ylb-oyya-bgzx-QKlN-HkEC-gQ2h-HRhlzY', 'scsi-0QEMU_QEMU_HARDDISK_b014772c-38b5-4caa-9603-223bc8ef3a74', 'scsi-SQEMU_QEMU_HARDDISK_b014772c-38b5-4caa-9603-223bc8ef3a74'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.741094 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c6467dc2--49cb--511a--ae45--cb6bd8ce65cd-osd--block--c6467dc2--49cb--511a--ae45--cb6bd8ce65cd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Bq1yaU-lh82-MUro-hneI-alZs-sfZu-Db2wDT', 'scsi-0QEMU_QEMU_HARDDISK_70272979-0540-4b40-8ef0-41f73c6a4a5a', 'scsi-SQEMU_QEMU_HARDDISK_70272979-0540-4b40-8ef0-41f73c6a4a5a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.741101 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.741108 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b592fbb-955b-4fdf-b12f-717d86698fde', 'scsi-SQEMU_QEMU_HARDDISK_5b592fbb-955b-4fdf-b12f-717d86698fde'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.741146 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.741154 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.741161 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.741169 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.741181 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.741188 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 00:55:14.741196 | orchestrator | skipping:
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741211 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741219 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741253 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741260 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741272 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741279 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741286 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.741293 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741307 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741319 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3', 'scsi-SQEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part1', 'scsi-SQEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part14', 'scsi-SQEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part15', 'scsi-SQEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part16', 'scsi-SQEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741328 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741342 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--81b3d681--fa24--5b92--b5b8--11e84f5b22d9-osd--block--81b3d681--fa24--5b92--b5b8--11e84f5b22d9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xqFisJ-bzmJ-mbhN-Vi30-8HQb-PT81-9BRMzc', 'scsi-0QEMU_QEMU_HARDDISK_330cb526-2149-4826-b513-02c8e88ca89e', 'scsi-SQEMU_QEMU_HARDDISK_330cb526-2149-4826-b513-02c8e88ca89e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741349 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741361 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f34b523-bd6d-4929-b1b4-04af8dcf542b', 'scsi-SQEMU_QEMU_HARDDISK_6f34b523-bd6d-4929-b1b4-04af8dcf542b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f34b523-bd6d-4929-b1b4-04af8dcf542b-part1', 'scsi-SQEMU_QEMU_HARDDISK_6f34b523-bd6d-4929-b1b4-04af8dcf542b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f34b523-bd6d-4929-b1b4-04af8dcf542b-part14', 'scsi-SQEMU_QEMU_HARDDISK_6f34b523-bd6d-4929-b1b4-04af8dcf542b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f34b523-bd6d-4929-b1b4-04af8dcf542b-part15', 'scsi-SQEMU_QEMU_HARDDISK_6f34b523-bd6d-4929-b1b4-04af8dcf542b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6f34b523-bd6d-4929-b1b4-04af8dcf542b-part16', 'scsi-SQEMU_QEMU_HARDDISK_6f34b523-bd6d-4929-b1b4-04af8dcf542b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-04 00:55:14.741368 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741383 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-02-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741391 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5659fb6c--b6d6--5368--9f3c--0e525a1333df-osd--block--5659fb6c--b6d6--5368--9f3c--0e525a1333df'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-niF0yj-rLoA-623U-KMw1-I2na-LHzi-DZgykD', 'scsi-0QEMU_QEMU_HARDDISK_e6547550-6f0e-4316-b715-af657c75c64a', 'scsi-SQEMU_QEMU_HARDDISK_e6547550-6f0e-4316-b715-af657c75c64a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741398 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741409 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b2cce40-d718-4f99-a243-3b703c717e59', 'scsi-SQEMU_QEMU_HARDDISK_6b2cce40-d718-4f99-a243-3b703c717e59'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741416 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741431 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741438 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741445 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741452 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741463 | orchestrator | skipping: [testbed-node-1] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741471 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741486 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80f63be7-f780-4338-9bbf-82469273ebcd', 'scsi-SQEMU_QEMU_HARDDISK_80f63be7-f780-4338-9bbf-82469273ebcd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80f63be7-f780-4338-9bbf-82469273ebcd-part1', 'scsi-SQEMU_QEMU_HARDDISK_80f63be7-f780-4338-9bbf-82469273ebcd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80f63be7-f780-4338-9bbf-82469273ebcd-part14', 'scsi-SQEMU_QEMU_HARDDISK_80f63be7-f780-4338-9bbf-82469273ebcd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80f63be7-f780-4338-9bbf-82469273ebcd-part15', 'scsi-SQEMU_QEMU_HARDDISK_80f63be7-f780-4338-9bbf-82469273ebcd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_80f63be7-f780-4338-9bbf-82469273ebcd-part16', 'scsi-SQEMU_QEMU_HARDDISK_80f63be7-f780-4338-9bbf-82469273ebcd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-04 00:55:14.741493 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.741503 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-17-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741510 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.741517 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.741524 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741536 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741549 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741556 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741563 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741570 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741581 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741597 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741608 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a77de84b-d98a-4a0a-b405-24b611969fa7', 'scsi-SQEMU_QEMU_HARDDISK_a77de84b-d98a-4a0a-b405-24b611969fa7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a77de84b-d98a-4a0a-b405-24b611969fa7-part1', 'scsi-SQEMU_QEMU_HARDDISK_a77de84b-d98a-4a0a-b405-24b611969fa7-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a77de84b-d98a-4a0a-b405-24b611969fa7-part14', 'scsi-SQEMU_QEMU_HARDDISK_a77de84b-d98a-4a0a-b405-24b611969fa7-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a77de84b-d98a-4a0a-b405-24b611969fa7-part15', 'scsi-SQEMU_QEMU_HARDDISK_a77de84b-d98a-4a0a-b405-24b611969fa7-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_a77de84b-d98a-4a0a-b405-24b611969fa7-part16', 'scsi-SQEMU_QEMU_HARDDISK_a77de84b-d98a-4a0a-b405-24b611969fa7-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741616 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:55:14.741623 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.741630 | orchestrator | 2026-02-04 00:55:14.741640 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-04 00:55:14.741649 | orchestrator | Wednesday 04 February 2026 00:44:59 +0000 (0:00:01.376) 0:00:31.757 **** 2026-02-04 00:55:14.741660 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.741668 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.741675 | orchestrator | ok: [testbed-node-5] 2026-02-04 
00:55:14.741683 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.741691 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.741698 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.741706 | orchestrator | 2026-02-04 00:55:14.741713 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-04 00:55:14.741721 | orchestrator | Wednesday 04 February 2026 00:45:01 +0000 (0:00:01.241) 0:00:32.998 **** 2026-02-04 00:55:14.741728 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.741736 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.741743 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.741751 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.741758 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.741766 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.741774 | orchestrator | 2026-02-04 00:55:14.741781 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-04 00:55:14.741789 | orchestrator | Wednesday 04 February 2026 00:45:01 +0000 (0:00:00.494) 0:00:33.492 **** 2026-02-04 00:55:14.741796 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.741804 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.741812 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.741819 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.741827 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.741834 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.741842 | orchestrator | 2026-02-04 00:55:14.741849 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-04 00:55:14.741857 | orchestrator | Wednesday 04 February 2026 00:45:02 +0000 (0:00:00.813) 0:00:34.305 **** 2026-02-04 00:55:14.741865 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.741872 | orchestrator | skipping: [testbed-node-4] 
2026-02-04 00:55:14.741880 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.741888 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.741895 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.741903 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.741910 | orchestrator | 2026-02-04 00:55:14.741918 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-04 00:55:14.741925 | orchestrator | Wednesday 04 February 2026 00:45:03 +0000 (0:00:00.716) 0:00:35.022 **** 2026-02-04 00:55:14.741933 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.741941 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.741948 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.741956 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.741963 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.741970 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.741977 | orchestrator | 2026-02-04 00:55:14.741988 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-04 00:55:14.741995 | orchestrator | Wednesday 04 February 2026 00:45:04 +0000 (0:00:01.257) 0:00:36.279 **** 2026-02-04 00:55:14.742002 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.742009 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.742060 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.742066 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.742073 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.742079 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.742086 | orchestrator | 2026-02-04 00:55:14.742092 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-04 00:55:14.742099 | orchestrator | Wednesday 04 February 2026 00:45:04 +0000 (0:00:00.500) 0:00:36.780 **** 
2026-02-04 00:55:14.742106 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-04 00:55:14.742113 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-04 00:55:14.742135 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-04 00:55:14.742147 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-04 00:55:14.742153 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-04 00:55:14.742159 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-04 00:55:14.742166 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-04 00:55:14.742173 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-04 00:55:14.742179 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-04 00:55:14.742186 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-04 00:55:14.742192 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-04 00:55:14.742199 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-04 00:55:14.742206 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-04 00:55:14.742213 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-04 00:55:14.742220 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-04 00:55:14.742227 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-04 00:55:14.742233 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-04 00:55:14.742240 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-04 00:55:14.742247 | orchestrator | 2026-02-04 00:55:14.742254 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-04 00:55:14.742261 | orchestrator | Wednesday 04 February 2026 00:45:08 +0000 (0:00:03.783) 0:00:40.564 **** 2026-02-04 00:55:14.742268 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-0)  2026-02-04 00:55:14.742275 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-04 00:55:14.742282 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-04 00:55:14.742289 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-04 00:55:14.742295 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-04 00:55:14.742302 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-04 00:55:14.742309 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.742316 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-04 00:55:14.742327 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-04 00:55:14.742334 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-04 00:55:14.742341 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.742347 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.742354 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-04 00:55:14.742361 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-04 00:55:14.742367 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-04 00:55:14.742374 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-04 00:55:14.742381 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-04 00:55:14.742388 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-04 00:55:14.742395 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.742401 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-04 00:55:14.742408 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-04 00:55:14.742415 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.742421 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-2)  2026-02-04 00:55:14.742428 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.742434 | orchestrator | 2026-02-04 00:55:14.742441 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-04 00:55:14.742448 | orchestrator | Wednesday 04 February 2026 00:45:09 +0000 (0:00:00.756) 0:00:41.321 **** 2026-02-04 00:55:14.742455 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.742462 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.742469 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.742475 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:55:14.742487 | orchestrator | 2026-02-04 00:55:14.742494 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-04 00:55:14.742501 | orchestrator | Wednesday 04 February 2026 00:45:10 +0000 (0:00:00.973) 0:00:42.295 **** 2026-02-04 00:55:14.742508 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.742515 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.742522 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.742529 | orchestrator | 2026-02-04 00:55:14.742535 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-04 00:55:14.742542 | orchestrator | Wednesday 04 February 2026 00:45:10 +0000 (0:00:00.335) 0:00:42.631 **** 2026-02-04 00:55:14.742549 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.742556 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.742563 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.742569 | orchestrator | 2026-02-04 00:55:14.742579 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 
2026-02-04 00:55:14.742586 | orchestrator | Wednesday 04 February 2026 00:45:11 +0000 (0:00:00.427) 0:00:43.058 **** 2026-02-04 00:55:14.742592 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.742599 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.742606 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.742613 | orchestrator | 2026-02-04 00:55:14.742619 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-04 00:55:14.742626 | orchestrator | Wednesday 04 February 2026 00:45:12 +0000 (0:00:01.018) 0:00:44.076 **** 2026-02-04 00:55:14.742633 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.742640 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.742646 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.742653 | orchestrator | 2026-02-04 00:55:14.742659 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-04 00:55:14.742666 | orchestrator | Wednesday 04 February 2026 00:45:12 +0000 (0:00:00.528) 0:00:44.605 **** 2026-02-04 00:55:14.742673 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 00:55:14.742680 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 00:55:14.742686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 00:55:14.742693 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.742700 | orchestrator | 2026-02-04 00:55:14.742707 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-04 00:55:14.742713 | orchestrator | Wednesday 04 February 2026 00:45:13 +0000 (0:00:00.523) 0:00:45.128 **** 2026-02-04 00:55:14.742720 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 00:55:14.742727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 00:55:14.742733 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2026-02-04 00:55:14.742740 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.742746 | orchestrator | 2026-02-04 00:55:14.742753 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-04 00:55:14.742760 | orchestrator | Wednesday 04 February 2026 00:45:13 +0000 (0:00:00.568) 0:00:45.697 **** 2026-02-04 00:55:14.742767 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 00:55:14.742774 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 00:55:14.742780 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 00:55:14.742786 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.742791 | orchestrator | 2026-02-04 00:55:14.742798 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-04 00:55:14.742804 | orchestrator | Wednesday 04 February 2026 00:45:14 +0000 (0:00:00.424) 0:00:46.122 **** 2026-02-04 00:55:14.742810 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.742816 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.742827 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.742833 | orchestrator | 2026-02-04 00:55:14.742840 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-04 00:55:14.742847 | orchestrator | Wednesday 04 February 2026 00:45:14 +0000 (0:00:00.323) 0:00:46.445 **** 2026-02-04 00:55:14.742854 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-04 00:55:14.742860 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-04 00:55:14.742878 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-04 00:55:14.742885 | orchestrator | 2026-02-04 00:55:14.742892 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-04 00:55:14.742899 | orchestrator | Wednesday 04 February 2026 
00:45:15 +0000 (0:00:00.774) 0:00:47.219 **** 2026-02-04 00:55:14.742905 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-04 00:55:14.742913 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 00:55:14.742920 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 00:55:14.742927 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-04 00:55:14.742933 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-04 00:55:14.742940 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-04 00:55:14.742947 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-04 00:55:14.742954 | orchestrator | 2026-02-04 00:55:14.742960 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-04 00:55:14.742967 | orchestrator | Wednesday 04 February 2026 00:45:16 +0000 (0:00:00.737) 0:00:47.957 **** 2026-02-04 00:55:14.742974 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-04 00:55:14.742981 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 00:55:14.742988 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 00:55:14.742994 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-04 00:55:14.743001 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-04 00:55:14.743008 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-04 00:55:14.743015 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 
2026-02-04 00:55:14.743021 | orchestrator | 2026-02-04 00:55:14.743028 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-04 00:55:14.743035 | orchestrator | Wednesday 04 February 2026 00:45:18 +0000 (0:00:02.049) 0:00:50.007 **** 2026-02-04 00:55:14.743046 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:55:14.743055 | orchestrator | 2026-02-04 00:55:14.743061 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-04 00:55:14.743068 | orchestrator | Wednesday 04 February 2026 00:45:19 +0000 (0:00:01.350) 0:00:51.358 **** 2026-02-04 00:55:14.743075 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:55:14.743082 | orchestrator | 2026-02-04 00:55:14.743089 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-04 00:55:14.743095 | orchestrator | Wednesday 04 February 2026 00:45:20 +0000 (0:00:01.010) 0:00:52.368 **** 2026-02-04 00:55:14.743102 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.743109 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.743116 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.743161 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.743174 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.743181 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.743188 | orchestrator | 2026-02-04 00:55:14.743194 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-04 00:55:14.743201 | orchestrator | Wednesday 04 February 2026 00:45:21 +0000 (0:00:01.092) 0:00:53.461 **** 2026-02-04 
00:55:14.743208 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.743214 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.743221 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.743228 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.743234 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.743240 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.743246 | orchestrator | 2026-02-04 00:55:14.743252 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-04 00:55:14.743258 | orchestrator | Wednesday 04 February 2026 00:45:22 +0000 (0:00:01.159) 0:00:54.620 **** 2026-02-04 00:55:14.743264 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.743271 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.743277 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.743284 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.743291 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.743298 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.743304 | orchestrator | 2026-02-04 00:55:14.743311 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-04 00:55:14.743317 | orchestrator | Wednesday 04 February 2026 00:45:23 +0000 (0:00:00.769) 0:00:55.389 **** 2026-02-04 00:55:14.743324 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.743330 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.743337 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.743344 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.743350 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.743357 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.743364 | orchestrator | 2026-02-04 00:55:14.743370 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-04 00:55:14.743377 | orchestrator | 
Wednesday 04 February 2026 00:45:24 +0000 (0:00:00.717) 0:00:56.107 **** 2026-02-04 00:55:14.743383 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.743390 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.743397 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.743403 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.743410 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.743422 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.743429 | orchestrator | 2026-02-04 00:55:14.743436 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-04 00:55:14.743443 | orchestrator | Wednesday 04 February 2026 00:45:25 +0000 (0:00:01.099) 0:00:57.206 **** 2026-02-04 00:55:14.743449 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.743456 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.743463 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.743470 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.743476 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.743483 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.743489 | orchestrator | 2026-02-04 00:55:14.743496 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-04 00:55:14.743503 | orchestrator | Wednesday 04 February 2026 00:45:25 +0000 (0:00:00.583) 0:00:57.789 **** 2026-02-04 00:55:14.743509 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.743516 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.743523 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.743530 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.743536 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.743543 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.743550 | orchestrator | 2026-02-04 00:55:14.743556 | orchestrator | TASK 
[ceph-handler : Check for a ceph-crash container] ************************* 2026-02-04 00:55:14.743569 | orchestrator | Wednesday 04 February 2026 00:45:26 +0000 (0:00:00.764) 0:00:58.554 **** 2026-02-04 00:55:14.743576 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.743582 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.743589 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.743596 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.743603 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.743610 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.743617 | orchestrator | 2026-02-04 00:55:14.743623 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-04 00:55:14.743630 | orchestrator | Wednesday 04 February 2026 00:45:28 +0000 (0:00:01.385) 0:00:59.939 **** 2026-02-04 00:55:14.743637 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.743644 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.743650 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.743657 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.743663 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.743670 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.743677 | orchestrator | 2026-02-04 00:55:14.743683 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-04 00:55:14.743690 | orchestrator | Wednesday 04 February 2026 00:45:29 +0000 (0:00:01.413) 0:01:01.352 **** 2026-02-04 00:55:14.743697 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.743704 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.743711 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.743718 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.743725 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.743735 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.743742 | 
orchestrator | 2026-02-04 00:55:14.743749 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-04 00:55:14.743755 | orchestrator | Wednesday 04 February 2026 00:45:31 +0000 (0:00:01.546) 0:01:02.899 **** 2026-02-04 00:55:14.743762 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.743769 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.743775 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.743782 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.743789 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.743795 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.743802 | orchestrator | 2026-02-04 00:55:14.743809 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-04 00:55:14.743816 | orchestrator | Wednesday 04 February 2026 00:45:32 +0000 (0:00:01.039) 0:01:03.938 **** 2026-02-04 00:55:14.743822 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.743829 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.743836 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.743842 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.743849 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.743856 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.743863 | orchestrator | 2026-02-04 00:55:14.743869 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-04 00:55:14.743875 | orchestrator | Wednesday 04 February 2026 00:45:33 +0000 (0:00:00.936) 0:01:04.875 **** 2026-02-04 00:55:14.743880 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.743887 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.743894 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.743900 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.743907 | orchestrator | skipping: [testbed-node-1] 2026-02-04 
00:55:14.743914 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.743921 | orchestrator | 2026-02-04 00:55:14.743928 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-04 00:55:14.743935 | orchestrator | Wednesday 04 February 2026 00:45:34 +0000 (0:00:01.044) 0:01:05.919 **** 2026-02-04 00:55:14.743942 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.743948 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.743955 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.743967 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.743974 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.743981 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.743987 | orchestrator | 2026-02-04 00:55:14.743994 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-04 00:55:14.744001 | orchestrator | Wednesday 04 February 2026 00:45:35 +0000 (0:00:00.928) 0:01:06.848 **** 2026-02-04 00:55:14.744007 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.744014 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.744021 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.744027 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.744034 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.744040 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.744047 | orchestrator | 2026-02-04 00:55:14.744054 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-04 00:55:14.744060 | orchestrator | Wednesday 04 February 2026 00:45:36 +0000 (0:00:01.388) 0:01:08.237 **** 2026-02-04 00:55:14.744066 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.744071 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.744077 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.744083 | 
orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.744093 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.744099 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.744104 | orchestrator |
2026-02-04 00:55:14.744111 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-04 00:55:14.744116 | orchestrator | Wednesday 04 February 2026 00:45:37 +0000 (0:00:00.979) 0:01:09.217 ****
2026-02-04 00:55:14.744139 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.744146 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.744152 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.744158 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.744164 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.744169 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.744175 | orchestrator |
2026-02-04 00:55:14.744181 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-04 00:55:14.744186 | orchestrator | Wednesday 04 February 2026 00:45:38 +0000 (0:00:01.060) 0:01:10.278 ****
2026-02-04 00:55:14.744193 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.744198 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.744204 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.744209 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.744215 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.744221 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.744227 | orchestrator |
2026-02-04 00:55:14.744232 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-04 00:55:14.744238 | orchestrator | Wednesday 04 February 2026 00:45:39 +0000 (0:00:00.580) 0:01:10.858 ****
2026-02-04 00:55:14.744243 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.744248 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.744254 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.744260 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.744265 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.744271 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.744277 | orchestrator |
2026-02-04 00:55:14.744283 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-04 00:55:14.744288 | orchestrator | Wednesday 04 February 2026 00:45:40 +0000 (0:00:01.257) 0:01:12.115 ****
2026-02-04 00:55:14.744294 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:55:14.744299 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:55:14.744305 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:55:14.744311 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:55:14.744317 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:55:14.744323 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:55:14.744329 | orchestrator |
2026-02-04 00:55:14.744335 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-04 00:55:14.744348 | orchestrator | Wednesday 04 February 2026 00:45:41 +0000 (0:00:01.498) 0:01:13.614 ****
2026-02-04 00:55:14.744354 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:55:14.744360 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:55:14.744366 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:55:14.744372 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:55:14.744388 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:55:14.744394 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:55:14.744400 | orchestrator |
2026-02-04 00:55:14.744406 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-04 00:55:14.744413 | orchestrator | Wednesday 04 February 2026 00:45:44 +0000 (0:00:02.764) 0:01:16.378 ****
2026-02-04 00:55:14.744420 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:55:14.744428 | orchestrator |
2026-02-04 00:55:14.744435 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-04 00:55:14.744441 | orchestrator | Wednesday 04 February 2026 00:45:45 +0000 (0:00:01.325) 0:01:17.704 ****
2026-02-04 00:55:14.744448 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.744455 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.744461 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.744468 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.744474 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.744481 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.744488 | orchestrator |
2026-02-04 00:55:14.744494 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-04 00:55:14.744501 | orchestrator | Wednesday 04 February 2026 00:45:46 +0000 (0:00:00.687) 0:01:18.391 ****
2026-02-04 00:55:14.744507 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.744513 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.744519 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.744525 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.744531 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.744538 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.744545 | orchestrator |
2026-02-04 00:55:14.744551 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-04 00:55:14.744558 | orchestrator | Wednesday 04 February 2026 00:45:47 +0000 (0:00:00.791) 0:01:19.182 ****
2026-02-04 00:55:14.744565 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-04 00:55:14.744572 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-04 00:55:14.744578 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-04 00:55:14.744585 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-04 00:55:14.744592 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-04 00:55:14.744598 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-04 00:55:14.744606 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-04 00:55:14.744613 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-04 00:55:14.744619 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-04 00:55:14.744626 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-04 00:55:14.744642 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-04 00:55:14.744649 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-04 00:55:14.744656 | orchestrator |
2026-02-04 00:55:14.744663 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-04 00:55:14.744679 | orchestrator | Wednesday 04 February 2026 00:45:48 +0000 (0:00:01.342) 0:01:20.524 ****
2026-02-04 00:55:14.744685 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:55:14.744692 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:55:14.744698 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:55:14.744706 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:55:14.744712 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:55:14.744719 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:55:14.744725 | orchestrator |
2026-02-04 00:55:14.744732 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-04 00:55:14.744739 | orchestrator | Wednesday 04 February 2026 00:45:50 +0000 (0:00:01.450) 0:01:21.975 ****
2026-02-04 00:55:14.744746 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.744752 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.744759 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.744766 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.744773 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.744779 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.744786 | orchestrator |
2026-02-04 00:55:14.744792 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-04 00:55:14.744799 | orchestrator | Wednesday 04 February 2026 00:45:50 +0000 (0:00:00.605) 0:01:22.580 ****
2026-02-04 00:55:14.744805 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.744812 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.744819 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.744825 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.744832 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.744838 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.744845 | orchestrator |
2026-02-04 00:55:14.744852 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-04 00:55:14.744859 | orchestrator | Wednesday 04 February 2026 00:45:51 +0000 (0:00:00.940) 0:01:23.521 ****
2026-02-04 00:55:14.744866 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.744872 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.744879 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.744886 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.744892 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.744898 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.744905 | orchestrator |
2026-02-04 00:55:14.744912 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-04 00:55:14.744923 | orchestrator | Wednesday 04 February 2026 00:45:52 +0000 (0:00:00.791) 0:01:24.312 ****
2026-02-04 00:55:14.744930 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:55:14.744937 | orchestrator |
2026-02-04 00:55:14.744943 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-04 00:55:14.744950 | orchestrator | Wednesday 04 February 2026 00:45:53 +0000 (0:00:01.326) 0:01:25.639 ****
2026-02-04 00:55:14.744957 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.744963 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.744970 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.744977 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.744983 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.744990 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.744996 | orchestrator |
2026-02-04 00:55:14.745003 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-04 00:55:14.745010 | orchestrator | Wednesday 04 February 2026 00:46:44 +0000 (0:00:50.227) 0:02:15.866 ****
2026-02-04 00:55:14.745016 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-04 00:55:14.745023 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-04 00:55:14.745030 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-04 00:55:14.745041 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-04 00:55:14.745048 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-04 00:55:14.745055 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-04 00:55:14.745062 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.745068 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-04 00:55:14.745075 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-04 00:55:14.745082 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-04 00:55:14.745088 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.745095 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-04 00:55:14.745102 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-04 00:55:14.745109 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-04 00:55:14.745115 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.745161 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-04 00:55:14.745168 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-04 00:55:14.745174 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-04 00:55:14.745181 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.745188 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.745199 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-04 00:55:14.745206 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-04 00:55:14.745213 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-04 00:55:14.745219 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.745226 | orchestrator |
2026-02-04 00:55:14.745233 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-04 00:55:14.745240 | orchestrator | Wednesday 04 February 2026 00:46:44 +0000 (0:00:00.821) 0:02:16.688 ****
2026-02-04 00:55:14.745246 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.745253 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.745260 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.745266 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.745273 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.745280 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.745286 | orchestrator |
2026-02-04 00:55:14.745293 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-04 00:55:14.745300 | orchestrator | Wednesday 04 February 2026 00:46:45 +0000 (0:00:01.062) 0:02:17.750 ****
2026-02-04 00:55:14.745306 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.745313 | orchestrator |
2026-02-04 00:55:14.745320 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-04 00:55:14.745327 | orchestrator | Wednesday 04 February 2026 00:46:46 +0000 (0:00:00.160) 0:02:17.911 ****
2026-02-04 00:55:14.745334 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.745340 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.745347 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.745354 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.745361 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.745367 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.745374 | orchestrator |
2026-02-04 00:55:14.745381 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-04 00:55:14.745387 | orchestrator | Wednesday 04 February 2026 00:46:46 +0000 (0:00:00.668) 0:02:18.580 ****
2026-02-04 00:55:14.745394 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.745406 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.745413 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.745419 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.745426 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.745433 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.745440 | orchestrator |
2026-02-04 00:55:14.745446 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-04 00:55:14.745453 | orchestrator | Wednesday 04 February 2026 00:46:47 +0000 (0:00:00.525) 0:02:19.226 ****
2026-02-04 00:55:14.745460 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.745467 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.745477 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.745484 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.745491 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.745497 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.745504 | orchestrator |
2026-02-04 00:55:14.745511 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-04 00:55:14.745517 | orchestrator | Wednesday 04 February 2026 00:46:47 +0000 (0:00:00.525) 0:02:19.751 ****
2026-02-04 00:55:14.745524 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.745531 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.745537 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.745544 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.745551 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.745557 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.745564 | orchestrator |
2026-02-04 00:55:14.745570 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-04 00:55:14.745578 | orchestrator | Wednesday 04 February 2026 00:46:50 +0000 (0:00:02.445) 0:02:22.196 ****
2026-02-04 00:55:14.745584 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.745591 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.745597 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.745604 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.745611 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.745617 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.745624 | orchestrator |
2026-02-04 00:55:14.745631 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-04 00:55:14.745637 | orchestrator | Wednesday 04 February 2026 00:46:51 +0000 (0:00:00.634) 0:02:22.831 ****
2026-02-04 00:55:14.745645 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:55:14.745653 | orchestrator |
2026-02-04 00:55:14.745659 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-04 00:55:14.745666 | orchestrator | Wednesday 04 February 2026 00:46:52 +0000 (0:00:01.006) 0:02:23.837 ****
2026-02-04 00:55:14.745673 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.745679 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.745686 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.745693 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.745699 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.745706 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.745713 | orchestrator |
2026-02-04 00:55:14.745720 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-04 00:55:14.745726 | orchestrator | Wednesday 04 February 2026 00:46:52 +0000 (0:00:00.795) 0:02:24.633 ****
2026-02-04 00:55:14.745733 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.745740 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.745747 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.745754 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.745760 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.745767 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.745774 | orchestrator |
2026-02-04 00:55:14.745781 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-04 00:55:14.745793 | orchestrator | Wednesday 04 February 2026 00:46:53 +0000 (0:00:00.473) 0:02:25.106 ****
2026-02-04 00:55:14.745799 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.745806 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.745816 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.745823 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.745829 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.745836 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.745843 | orchestrator |
2026-02-04 00:55:14.745850 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-04 00:55:14.745856 | orchestrator | Wednesday 04 February 2026 00:46:53 +0000 (0:00:00.683) 0:02:25.789 ****
2026-02-04 00:55:14.745863 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.745870 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.745877 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.745883 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.745890 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.745896 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.745903 | orchestrator |
2026-02-04 00:55:14.745910 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-04 00:55:14.745916 | orchestrator | Wednesday 04 February 2026 00:46:54 +0000 (0:00:00.546) 0:02:26.336 ****
2026-02-04 00:55:14.745923 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.745930 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.745936 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.745943 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.745950 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.745957 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.745963 | orchestrator |
2026-02-04 00:55:14.745970 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-04 00:55:14.745977 | orchestrator | Wednesday 04 February 2026 00:46:55 +0000 (0:00:00.812) 0:02:27.149 ****
2026-02-04 00:55:14.745983 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.745990 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.745997 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.746003 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.746010 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.746060 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.746067 | orchestrator |
2026-02-04 00:55:14.746073 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-04 00:55:14.746079 | orchestrator | Wednesday 04 February 2026 00:46:55 +0000 (0:00:00.579) 0:02:27.729 ****
2026-02-04 00:55:14.746085 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.746092 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.746098 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.746103 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.746108 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.746114 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.746134 | orchestrator |
2026-02-04 00:55:14.746141 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-04 00:55:14.746147 | orchestrator | Wednesday 04 February 2026 00:46:56 +0000 (0:00:00.766) 0:02:28.496 ****
2026-02-04 00:55:14.746157 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.746163 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.746169 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.746175 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.746181 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.746188 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.746194 | orchestrator |
2026-02-04 00:55:14.746199 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-04 00:55:14.746206 | orchestrator | Wednesday 04 February 2026 00:46:57 +0000 (0:00:00.492) 0:02:28.988 ****
2026-02-04 00:55:14.746211 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.746224 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.746230 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.746237 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.746244 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.746251 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.746258 | orchestrator |
2026-02-04 00:55:14.746265 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-04 00:55:14.746272 | orchestrator | Wednesday 04 February 2026 00:46:58 +0000 (0:00:01.092) 0:02:30.081 ****
2026-02-04 00:55:14.746279 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:55:14.746286 | orchestrator |
2026-02-04 00:55:14.746293 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-04 00:55:14.746300 | orchestrator | Wednesday 04 February 2026 00:46:59 +0000 (0:00:01.099) 0:02:31.180 ****
2026-02-04 00:55:14.746307 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-02-04 00:55:14.746315 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-02-04 00:55:14.746322 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-02-04 00:55:14.746329 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-02-04 00:55:14.746335 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-02-04 00:55:14.746342 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-02-04 00:55:14.746348 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-02-04 00:55:14.746354 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-04 00:55:14.746361 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-02-04 00:55:14.746366 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-02-04 00:55:14.746373 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-02-04 00:55:14.746380 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-02-04 00:55:14.746387 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-02-04 00:55:14.746394 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-04 00:55:14.746401 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-02-04 00:55:14.746408 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-02-04 00:55:14.746414 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-04 00:55:14.746420 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-04 00:55:14.746441 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-02-04 00:55:14.746447 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-02-04 00:55:14.746453 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-04 00:55:14.746459 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-02-04 00:55:14.746465 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-04 00:55:14.746471 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-04 00:55:14.746479 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-02-04 00:55:14.746485 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-04 00:55:14.746492 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-02-04 00:55:14.746500 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-02-04 00:55:14.746507 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-04 00:55:14.746513 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-04 00:55:14.746521 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-02-04 00:55:14.746528 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-04 00:55:14.746535 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-02-04 00:55:14.746542 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-02-04 00:55:14.746555 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-04 00:55:14.746562 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-02-04 00:55:14.746569 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-04 00:55:14.746576 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-04 00:55:14.746584 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-02-04 00:55:14.746591 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-02-04 00:55:14.746598 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-04 00:55:14.746605 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-02-04 00:55:14.746612 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-04 00:55:14.746619 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-04 00:55:14.746626 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-04 00:55:14.746633 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-02-04 00:55:14.746644 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-04 00:55:14.746651 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-04 00:55:14.746657 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-04 00:55:14.746663 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-04 00:55:14.746670 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-02-04 00:55:14.746676 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-04 00:55:14.746683 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-04 00:55:14.746689 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-04 00:55:14.746696 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-04 00:55:14.746703 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-04 00:55:14.746709 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-04 00:55:14.746716 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-04 00:55:14.746723 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-04 00:55:14.746729 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-04 00:55:14.746736 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-04 00:55:14.746743 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-04 00:55:14.746750 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-04 00:55:14.746756 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-04 00:55:14.746763 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-04 00:55:14.746770 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-04 00:55:14.746777 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-04 00:55:14.746783 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-04 00:55:14.746790 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-04 00:55:14.746796 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-04 00:55:14.746803 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-04 00:55:14.746810 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-04 00:55:14.746817 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-04 00:55:14.746823 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-04 00:55:14.746835 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-04 00:55:14.746842 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-04 00:55:14.746854 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-04 00:55:14.746861 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-04 00:55:14.746868 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-02-04 00:55:14.746875 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-04 00:55:14.746881 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-04 00:55:14.746888 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-04 00:55:14.746894 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-04 00:55:14.746901 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-02-04 00:55:14.746909 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-02-04 00:55:14.746916 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-04 00:55:14.746923 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-02-04 00:55:14.746929 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-02-04 00:55:14.746936 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-04 00:55:14.746942 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-02-04 00:55:14.746949 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-02-04 00:55:14.746956 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-02-04 00:55:14.746962 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-02-04 00:55:14.746969 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-02-04 00:55:14.746975 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-02-04 00:55:14.746982 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-02-04 00:55:14.746988 | orchestrator |
2026-02-04 00:55:14.746995 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-04 00:55:14.747001 | orchestrator | Wednesday 04 February 2026 00:47:06 +0000 (0:00:06.784) 0:02:37.964 ****
2026-02-04 00:55:14.747008 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.747015 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.747021 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.747029 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 00:55:14.747036 | orchestrator |
2026-02-04 00:55:14.747042 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-04 00:55:14.747053 | orchestrator | Wednesday 04 February 2026 00:47:07 +0000 (0:00:01.034) 0:02:38.999 ****
2026-02-04 00:55:14.747060 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-04 00:55:14.747068 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-04 00:55:14.747074 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-04 00:55:14.747081 | orchestrator |
2026-02-04 00:55:14.747087 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-04 00:55:14.747094 | orchestrator | Wednesday 04 February 2026 00:47:08 +0000 (0:00:01.259) 0:02:40.258 ****
2026-02-04 00:55:14.747101 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-04 00:55:14.747107 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-04 00:55:14.747117 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-04 00:55:14.747140 | orchestrator |
2026-02-04 00:55:14.747146 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-04 00:55:14.747152 | orchestrator | Wednesday 04 February 2026 00:47:09 +0000 (0:00:01.381) 0:02:41.640 ****
2026-02-04 00:55:14.747158 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.747163 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.747169 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.747175 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.747181 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.747188 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.747193 | orchestrator |
2026-02-04 00:55:14.747199 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-04 00:55:14.747204 | orchestrator | Wednesday 04 February 2026 00:47:10 +0000 (0:00:00.608) 0:02:42.249 ****
2026-02-04 00:55:14.747211 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.747217 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.747223 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.747229 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.747235 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.747240 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.747246 | orchestrator |
2026-02-04 00:55:14.747252 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-04 00:55:14.747257 | orchestrator | Wednesday 04 February 2026 00:47:11 +0000 (0:00:00.679) 0:02:42.928 **** 2026-02-04 00:55:14.747263 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.747269 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.747275 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.747280 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.747286 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.747292 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.747298 | orchestrator | 2026-02-04 00:55:14.747310 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-04 00:55:14.747317 | orchestrator | Wednesday 04 February 2026 00:47:11 +0000 (0:00:00.591) 0:02:43.520 **** 2026-02-04 00:55:14.747323 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.747328 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.747334 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.747341 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.747347 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.747353 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.747360 | orchestrator | 2026-02-04 00:55:14.747365 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-04 00:55:14.747371 | orchestrator | Wednesday 04 February 2026 00:47:12 +0000 (0:00:00.809) 0:02:44.329 **** 2026-02-04 00:55:14.747377 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.747384 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.747390 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.747396 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.747402 | orchestrator | 
skipping: [testbed-node-1] 2026-02-04 00:55:14.747408 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.747414 | orchestrator | 2026-02-04 00:55:14.747420 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-04 00:55:14.747427 | orchestrator | Wednesday 04 February 2026 00:47:13 +0000 (0:00:00.657) 0:02:44.987 **** 2026-02-04 00:55:14.747434 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.747440 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.747447 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.747454 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.747460 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.747466 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.747479 | orchestrator | 2026-02-04 00:55:14.747484 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-04 00:55:14.747488 | orchestrator | Wednesday 04 February 2026 00:47:14 +0000 (0:00:01.088) 0:02:46.075 **** 2026-02-04 00:55:14.747492 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.747496 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.747500 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.747504 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.747507 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.747511 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.747515 | orchestrator | 2026-02-04 00:55:14.747519 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-04 00:55:14.747524 | orchestrator | Wednesday 04 February 2026 00:47:15 +0000 (0:00:00.796) 0:02:46.872 **** 2026-02-04 00:55:14.747528 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.747531 | orchestrator | 
skipping: [testbed-node-4] 2026-02-04 00:55:14.747535 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.747548 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.747552 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.747556 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.747560 | orchestrator | 2026-02-04 00:55:14.747564 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-04 00:55:14.747567 | orchestrator | Wednesday 04 February 2026 00:47:15 +0000 (0:00:00.624) 0:02:47.496 **** 2026-02-04 00:55:14.747572 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.747575 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.747579 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.747583 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.747587 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.747591 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.747595 | orchestrator | 2026-02-04 00:55:14.747599 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-04 00:55:14.747603 | orchestrator | Wednesday 04 February 2026 00:47:19 +0000 (0:00:03.411) 0:02:50.907 **** 2026-02-04 00:55:14.747607 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.747611 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.747615 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.747618 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.747623 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.747627 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.747631 | orchestrator | 2026-02-04 00:55:14.747635 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-04 00:55:14.747638 | orchestrator | Wednesday 04 February 2026 00:47:19 +0000 (0:00:00.726) 0:02:51.634 
**** 2026-02-04 00:55:14.747642 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.747646 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.747650 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.747654 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.747658 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.747661 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.747665 | orchestrator | 2026-02-04 00:55:14.747669 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-04 00:55:14.747673 | orchestrator | Wednesday 04 February 2026 00:47:20 +0000 (0:00:00.932) 0:02:52.567 **** 2026-02-04 00:55:14.747677 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.747681 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.747685 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.747688 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.747692 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.747696 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.747700 | orchestrator | 2026-02-04 00:55:14.747704 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-04 00:55:14.747708 | orchestrator | Wednesday 04 February 2026 00:47:21 +0000 (0:00:00.727) 0:02:53.295 **** 2026-02-04 00:55:14.747715 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-04 00:55:14.747719 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-04 00:55:14.747723 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-04 00:55:14.747727 | orchestrator | skipping: [testbed-node-0] 2026-02-04 
00:55:14.747736 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.747740 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.747744 | orchestrator | 2026-02-04 00:55:14.747749 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-04 00:55:14.747752 | orchestrator | Wednesday 04 February 2026 00:47:22 +0000 (0:00:00.533) 0:02:53.829 **** 2026-02-04 00:55:14.747759 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-02-04 00:55:14.747766 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-02-04 00:55:14.747771 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.747775 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-02-04 00:55:14.747779 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-02-04 00:55:14.747783 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.747789 | orchestrator | 
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-02-04 00:55:14.747794 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2026-02-04 00:55:14.747797 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.747801 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.747805 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.747809 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.747813 | orchestrator | 2026-02-04 00:55:14.747817 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-04 00:55:14.747821 | orchestrator | Wednesday 04 February 2026 00:47:22 +0000 (0:00:00.791) 0:02:54.620 **** 2026-02-04 00:55:14.747825 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.747831 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.747838 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.747843 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.747850 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.747860 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.747865 | orchestrator | 2026-02-04 00:55:14.747871 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-04 00:55:14.747877 | orchestrator | Wednesday 04 February 2026 00:47:23 +0000 (0:00:00.647) 0:02:55.267 **** 2026-02-04 00:55:14.747883 | orchestrator | skipping: 
[testbed-node-3] 2026-02-04 00:55:14.747889 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.747895 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.747901 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.747907 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.747913 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.747919 | orchestrator | 2026-02-04 00:55:14.747925 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-04 00:55:14.747931 | orchestrator | Wednesday 04 February 2026 00:47:24 +0000 (0:00:00.695) 0:02:55.963 **** 2026-02-04 00:55:14.747938 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.747944 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.747951 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.747957 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.747963 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.747969 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.747976 | orchestrator | 2026-02-04 00:55:14.747982 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-04 00:55:14.747987 | orchestrator | Wednesday 04 February 2026 00:47:24 +0000 (0:00:00.707) 0:02:56.670 **** 2026-02-04 00:55:14.747993 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.748000 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.748006 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.748012 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.748018 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.748025 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.748031 | orchestrator | 2026-02-04 00:55:14.748038 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv6] **** 2026-02-04 00:55:14.748051 | orchestrator | Wednesday 04 February 2026 00:47:25 +0000 (0:00:00.957) 0:02:57.628 **** 2026-02-04 00:55:14.748057 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.748063 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.748069 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.748076 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.748082 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.748089 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.748095 | orchestrator | 2026-02-04 00:55:14.748101 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-04 00:55:14.748107 | orchestrator | Wednesday 04 February 2026 00:47:26 +0000 (0:00:00.709) 0:02:58.338 **** 2026-02-04 00:55:14.748114 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.748164 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.748170 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.748173 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.748177 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.748181 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.748185 | orchestrator | 2026-02-04 00:55:14.748189 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-04 00:55:14.748193 | orchestrator | Wednesday 04 February 2026 00:47:27 +0000 (0:00:01.464) 0:02:59.802 **** 2026-02-04 00:55:14.748197 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 00:55:14.748201 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 00:55:14.748205 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 00:55:14.748209 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.748213 | orchestrator | 2026-02-04 00:55:14.748217 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-04 00:55:14.748227 | orchestrator | Wednesday 04 February 2026 00:47:28 +0000 (0:00:00.344) 0:03:00.146 **** 2026-02-04 00:55:14.748231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 00:55:14.748235 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 00:55:14.748239 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 00:55:14.748242 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.748246 | orchestrator | 2026-02-04 00:55:14.748250 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-04 00:55:14.748254 | orchestrator | Wednesday 04 February 2026 00:47:28 +0000 (0:00:00.360) 0:03:00.506 **** 2026-02-04 00:55:14.748258 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 00:55:14.748262 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 00:55:14.748266 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 00:55:14.748270 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.748273 | orchestrator | 2026-02-04 00:55:14.748281 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-04 00:55:14.748286 | orchestrator | Wednesday 04 February 2026 00:47:29 +0000 (0:00:00.471) 0:03:00.978 **** 2026-02-04 00:55:14.748290 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.748294 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.748297 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.748301 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.748305 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.748309 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.748313 | orchestrator | 2026-02-04 00:55:14.748317 | 
orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-04 00:55:14.748321 | orchestrator | Wednesday 04 February 2026 00:47:29 +0000 (0:00:00.769) 0:03:01.748 **** 2026-02-04 00:55:14.748324 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-04 00:55:14.748328 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-04 00:55:14.748332 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-04 00:55:14.748337 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-04 00:55:14.748340 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.748344 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-02-04 00:55:14.748348 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.748352 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-02-04 00:55:14.748355 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.748359 | orchestrator | 2026-02-04 00:55:14.748363 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-04 00:55:14.748367 | orchestrator | Wednesday 04 February 2026 00:47:32 +0000 (0:00:02.103) 0:03:03.852 **** 2026-02-04 00:55:14.748371 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:55:14.748375 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:55:14.748379 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:55:14.748382 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:14.748386 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:14.748390 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:55:14.748394 | orchestrator | 2026-02-04 00:55:14.748398 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-04 00:55:14.748401 | orchestrator | Wednesday 04 February 2026 00:47:34 +0000 (0:00:02.495) 0:03:06.347 **** 2026-02-04 00:55:14.748405 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:55:14.748409 | 
orchestrator | changed: [testbed-node-4] 2026-02-04 00:55:14.748413 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:55:14.748416 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:14.748420 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:14.748424 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:55:14.748428 | orchestrator | 2026-02-04 00:55:14.748432 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-04 00:55:14.748436 | orchestrator | Wednesday 04 February 2026 00:47:35 +0000 (0:00:01.054) 0:03:07.402 **** 2026-02-04 00:55:14.748444 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.748447 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.748451 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.748456 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:55:14.748460 | orchestrator | 2026-02-04 00:55:14.748464 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-02-04 00:55:14.748473 | orchestrator | Wednesday 04 February 2026 00:47:36 +0000 (0:00:00.852) 0:03:08.254 **** 2026-02-04 00:55:14.748477 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.748481 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.748485 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.748489 | orchestrator | 2026-02-04 00:55:14.748493 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-02-04 00:55:14.748497 | orchestrator | Wednesday 04 February 2026 00:47:36 +0000 (0:00:00.255) 0:03:08.510 **** 2026-02-04 00:55:14.748501 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:14.748504 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:14.748508 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:55:14.748512 | orchestrator | 
2026-02-04 00:55:14.748516 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-02-04 00:55:14.748520 | orchestrator | Wednesday 04 February 2026 00:47:37 +0000 (0:00:01.244) 0:03:09.754 **** 2026-02-04 00:55:14.748524 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-04 00:55:14.748528 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-04 00:55:14.748531 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-04 00:55:14.748535 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.748539 | orchestrator | 2026-02-04 00:55:14.748544 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-02-04 00:55:14.748551 | orchestrator | Wednesday 04 February 2026 00:47:38 +0000 (0:00:00.847) 0:03:10.602 **** 2026-02-04 00:55:14.748557 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.748562 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.748568 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.748573 | orchestrator | 2026-02-04 00:55:14.748579 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-04 00:55:14.748585 | orchestrator | Wednesday 04 February 2026 00:47:39 +0000 (0:00:00.380) 0:03:10.983 **** 2026-02-04 00:55:14.748591 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.748597 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.748604 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.748610 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:55:14.748616 | orchestrator | 2026-02-04 00:55:14.748622 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-02-04 00:55:14.748627 | orchestrator | Wednesday 04 February 2026 00:47:40 +0000 
(0:00:00.968) 0:03:11.951 **** 2026-02-04 00:55:14.748633 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 00:55:14.748638 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 00:55:14.748644 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 00:55:14.748652 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.748657 | orchestrator | 2026-02-04 00:55:14.748663 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-02-04 00:55:14.748669 | orchestrator | Wednesday 04 February 2026 00:47:40 +0000 (0:00:00.305) 0:03:12.257 **** 2026-02-04 00:55:14.748676 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.748682 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.748688 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.748694 | orchestrator | 2026-02-04 00:55:14.748700 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-02-04 00:55:14.748711 | orchestrator | Wednesday 04 February 2026 00:47:40 +0000 (0:00:00.290) 0:03:12.547 **** 2026-02-04 00:55:14.748718 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.748724 | orchestrator | 2026-02-04 00:55:14.748730 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-02-04 00:55:14.748736 | orchestrator | Wednesday 04 February 2026 00:47:40 +0000 (0:00:00.192) 0:03:12.740 **** 2026-02-04 00:55:14.748743 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.748748 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.748752 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.748756 | orchestrator | 2026-02-04 00:55:14.748760 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-02-04 00:55:14.748764 | orchestrator | Wednesday 04 February 2026 00:47:41 
+0000 (0:00:00.281) 0:03:13.022 **** 2026-02-04 00:55:14.748768 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.748772 | orchestrator | 2026-02-04 00:55:14.748776 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2026-02-04 00:55:14.748780 | orchestrator | Wednesday 04 February 2026 00:47:41 +0000 (0:00:00.175) 0:03:13.198 **** 2026-02-04 00:55:14.748784 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.748788 | orchestrator | 2026-02-04 00:55:14.748792 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-02-04 00:55:14.748796 | orchestrator | Wednesday 04 February 2026 00:47:41 +0000 (0:00:00.197) 0:03:13.395 **** 2026-02-04 00:55:14.748800 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.748803 | orchestrator | 2026-02-04 00:55:14.748807 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-02-04 00:55:14.748836 | orchestrator | Wednesday 04 February 2026 00:47:41 +0000 (0:00:00.107) 0:03:13.503 **** 2026-02-04 00:55:14.748842 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.748848 | orchestrator | 2026-02-04 00:55:14.748855 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-02-04 00:55:14.748861 | orchestrator | Wednesday 04 February 2026 00:47:42 +0000 (0:00:00.525) 0:03:14.029 **** 2026-02-04 00:55:14.748867 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.748874 | orchestrator | 2026-02-04 00:55:14.748881 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-02-04 00:55:14.748887 | orchestrator | Wednesday 04 February 2026 00:47:42 +0000 (0:00:00.199) 0:03:14.229 **** 2026-02-04 00:55:14.748893 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 00:55:14.748899 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-3)  2026-02-04 00:55:14.748906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 00:55:14.748912 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.748918 | orchestrator | 2026-02-04 00:55:14.748924 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-02-04 00:55:14.748936 | orchestrator | Wednesday 04 February 2026 00:47:42 +0000 (0:00:00.383) 0:03:14.613 **** 2026-02-04 00:55:14.748943 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.748949 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.748956 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.748962 | orchestrator | 2026-02-04 00:55:14.748968 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-02-04 00:55:14.748975 | orchestrator | Wednesday 04 February 2026 00:47:43 +0000 (0:00:00.326) 0:03:14.939 **** 2026-02-04 00:55:14.748980 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.748986 | orchestrator | 2026-02-04 00:55:14.748994 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-02-04 00:55:14.749000 | orchestrator | Wednesday 04 February 2026 00:47:43 +0000 (0:00:00.176) 0:03:15.116 **** 2026-02-04 00:55:14.749006 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.749013 | orchestrator | 2026-02-04 00:55:14.749019 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-04 00:55:14.749026 | orchestrator | Wednesday 04 February 2026 00:47:43 +0000 (0:00:00.155) 0:03:15.271 **** 2026-02-04 00:55:14.749037 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.749044 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.749050 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.749057 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:55:14.749063 | orchestrator | 2026-02-04 00:55:14.749069 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-02-04 00:55:14.749075 | orchestrator | Wednesday 04 February 2026 00:47:44 +0000 (0:00:00.935) 0:03:16.207 **** 2026-02-04 00:55:14.749081 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.749088 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.749094 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.749100 | orchestrator | 2026-02-04 00:55:14.749107 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-02-04 00:55:14.749114 | orchestrator | Wednesday 04 February 2026 00:47:44 +0000 (0:00:00.321) 0:03:16.528 **** 2026-02-04 00:55:14.749133 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:55:14.749140 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:55:14.749146 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:55:14.749152 | orchestrator | 2026-02-04 00:55:14.749158 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-04 00:55:14.749164 | orchestrator | Wednesday 04 February 2026 00:47:46 +0000 (0:00:01.530) 0:03:18.058 **** 2026-02-04 00:55:14.749170 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 00:55:14.749177 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 00:55:14.749183 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 00:55:14.749194 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.749200 | orchestrator | 2026-02-04 00:55:14.749205 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-04 00:55:14.749211 | orchestrator | Wednesday 04 February 2026 00:47:46 +0000 (0:00:00.704) 
0:03:18.762 **** 2026-02-04 00:55:14.749216 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.749223 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.749229 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.749235 | orchestrator | 2026-02-04 00:55:14.749241 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-04 00:55:14.749246 | orchestrator | Wednesday 04 February 2026 00:47:47 +0000 (0:00:00.412) 0:03:19.175 **** 2026-02-04 00:55:14.749252 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.749257 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.749263 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.749269 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:55:14.749274 | orchestrator | 2026-02-04 00:55:14.749280 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-02-04 00:55:14.749286 | orchestrator | Wednesday 04 February 2026 00:47:48 +0000 (0:00:00.718) 0:03:19.893 **** 2026-02-04 00:55:14.749292 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.749299 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.749305 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.749311 | orchestrator | 2026-02-04 00:55:14.749318 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-02-04 00:55:14.749324 | orchestrator | Wednesday 04 February 2026 00:47:48 +0000 (0:00:00.389) 0:03:20.282 **** 2026-02-04 00:55:14.749330 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:55:14.749335 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:55:14.749341 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:55:14.749347 | orchestrator | 2026-02-04 00:55:14.749352 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] 
******************** 2026-02-04 00:55:14.749359 | orchestrator | Wednesday 04 February 2026 00:47:49 +0000 (0:00:01.113) 0:03:21.395 **** 2026-02-04 00:55:14.749365 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 00:55:14.749379 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 00:55:14.749386 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 00:55:14.749392 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.749398 | orchestrator | 2026-02-04 00:55:14.749404 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-02-04 00:55:14.749410 | orchestrator | Wednesday 04 February 2026 00:47:50 +0000 (0:00:00.532) 0:03:21.928 **** 2026-02-04 00:55:14.749416 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.749421 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.749425 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.749429 | orchestrator | 2026-02-04 00:55:14.749433 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-04 00:55:14.749436 | orchestrator | Wednesday 04 February 2026 00:47:50 +0000 (0:00:00.258) 0:03:22.186 **** 2026-02-04 00:55:14.749440 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.749444 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.749448 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.749452 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.749456 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.749466 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.749470 | orchestrator | 2026-02-04 00:55:14.749474 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-04 00:55:14.749478 | orchestrator | Wednesday 04 February 2026 00:47:50 +0000 (0:00:00.635) 0:03:22.822 **** 2026-02-04 
00:55:14.749482 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.749486 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.749490 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.749494 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:55:14.749498 | orchestrator | 2026-02-04 00:55:14.749501 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-02-04 00:55:14.749505 | orchestrator | Wednesday 04 February 2026 00:47:51 +0000 (0:00:00.725) 0:03:23.547 **** 2026-02-04 00:55:14.749509 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.749513 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.749517 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.749521 | orchestrator | 2026-02-04 00:55:14.749524 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-02-04 00:55:14.749528 | orchestrator | Wednesday 04 February 2026 00:47:52 +0000 (0:00:00.410) 0:03:23.957 **** 2026-02-04 00:55:14.749532 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:14.749536 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:14.749540 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:55:14.749543 | orchestrator | 2026-02-04 00:55:14.749547 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-04 00:55:14.749551 | orchestrator | Wednesday 04 February 2026 00:47:53 +0000 (0:00:01.315) 0:03:25.273 **** 2026-02-04 00:55:14.749555 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-04 00:55:14.749559 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-04 00:55:14.749563 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-04 00:55:14.749567 | orchestrator | skipping: [testbed-node-0] 2026-02-04 
00:55:14.749571 | orchestrator | 2026-02-04 00:55:14.749574 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-04 00:55:14.749578 | orchestrator | Wednesday 04 February 2026 00:47:53 +0000 (0:00:00.529) 0:03:25.802 **** 2026-02-04 00:55:14.749582 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.749586 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.749590 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.749593 | orchestrator | 2026-02-04 00:55:14.749597 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-02-04 00:55:14.749601 | orchestrator | 2026-02-04 00:55:14.749605 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-04 00:55:14.749613 | orchestrator | Wednesday 04 February 2026 00:47:54 +0000 (0:00:00.531) 0:03:26.334 **** 2026-02-04 00:55:14.749620 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:55:14.749625 | orchestrator | 2026-02-04 00:55:14.749629 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-04 00:55:14.749633 | orchestrator | Wednesday 04 February 2026 00:47:55 +0000 (0:00:00.615) 0:03:26.950 **** 2026-02-04 00:55:14.749636 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:55:14.749640 | orchestrator | 2026-02-04 00:55:14.749644 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-04 00:55:14.749648 | orchestrator | Wednesday 04 February 2026 00:47:55 +0000 (0:00:00.496) 0:03:27.446 **** 2026-02-04 00:55:14.749652 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.749656 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.749659 | 
orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.749663 | orchestrator | 2026-02-04 00:55:14.749667 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-04 00:55:14.749671 | orchestrator | Wednesday 04 February 2026 00:47:56 +0000 (0:00:00.978) 0:03:28.425 **** 2026-02-04 00:55:14.749675 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.749679 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.749682 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.749686 | orchestrator | 2026-02-04 00:55:14.749690 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-04 00:55:14.749694 | orchestrator | Wednesday 04 February 2026 00:47:56 +0000 (0:00:00.241) 0:03:28.666 **** 2026-02-04 00:55:14.749698 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.749702 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.749706 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.749709 | orchestrator | 2026-02-04 00:55:14.749713 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-04 00:55:14.749717 | orchestrator | Wednesday 04 February 2026 00:47:57 +0000 (0:00:00.224) 0:03:28.891 **** 2026-02-04 00:55:14.749721 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.749725 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.749728 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.749732 | orchestrator | 2026-02-04 00:55:14.749736 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-04 00:55:14.749740 | orchestrator | Wednesday 04 February 2026 00:47:57 +0000 (0:00:00.236) 0:03:29.127 **** 2026-02-04 00:55:14.749744 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.749748 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.749752 | orchestrator | ok: 
[testbed-node-2] 2026-02-04 00:55:14.749755 | orchestrator | 2026-02-04 00:55:14.749759 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-04 00:55:14.749763 | orchestrator | Wednesday 04 February 2026 00:47:58 +0000 (0:00:00.958) 0:03:30.086 **** 2026-02-04 00:55:14.749767 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.749771 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.749775 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.749779 | orchestrator | 2026-02-04 00:55:14.749782 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-04 00:55:14.749786 | orchestrator | Wednesday 04 February 2026 00:47:58 +0000 (0:00:00.272) 0:03:30.359 **** 2026-02-04 00:55:14.749793 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.749797 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.749801 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.749805 | orchestrator | 2026-02-04 00:55:14.749808 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-04 00:55:14.749812 | orchestrator | Wednesday 04 February 2026 00:47:58 +0000 (0:00:00.260) 0:03:30.620 **** 2026-02-04 00:55:14.749819 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.749823 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.749827 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.749831 | orchestrator | 2026-02-04 00:55:14.749835 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-04 00:55:14.749838 | orchestrator | Wednesday 04 February 2026 00:47:59 +0000 (0:00:00.728) 0:03:31.348 **** 2026-02-04 00:55:14.749842 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.749846 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.749850 | orchestrator | ok: [testbed-node-2] 2026-02-04 
00:55:14.749854 | orchestrator | 2026-02-04 00:55:14.749858 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-04 00:55:14.749861 | orchestrator | Wednesday 04 February 2026 00:48:00 +0000 (0:00:01.019) 0:03:32.367 **** 2026-02-04 00:55:14.749865 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.749869 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.749873 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.749877 | orchestrator | 2026-02-04 00:55:14.749881 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-04 00:55:14.749884 | orchestrator | Wednesday 04 February 2026 00:48:00 +0000 (0:00:00.276) 0:03:32.643 **** 2026-02-04 00:55:14.749888 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.749892 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.749896 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.749900 | orchestrator | 2026-02-04 00:55:14.749904 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-04 00:55:14.749907 | orchestrator | Wednesday 04 February 2026 00:48:01 +0000 (0:00:00.303) 0:03:32.947 **** 2026-02-04 00:55:14.749911 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.749915 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.749919 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.749925 | orchestrator | 2026-02-04 00:55:14.749931 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-04 00:55:14.749937 | orchestrator | Wednesday 04 February 2026 00:48:01 +0000 (0:00:00.271) 0:03:33.218 **** 2026-02-04 00:55:14.749943 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.749949 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.749955 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.749961 | 
orchestrator | 2026-02-04 00:55:14.749966 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-04 00:55:14.749973 | orchestrator | Wednesday 04 February 2026 00:48:01 +0000 (0:00:00.265) 0:03:33.484 **** 2026-02-04 00:55:14.749986 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.749992 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.749998 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.750005 | orchestrator | 2026-02-04 00:55:14.750115 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-04 00:55:14.750141 | orchestrator | Wednesday 04 February 2026 00:48:02 +0000 (0:00:00.450) 0:03:33.934 **** 2026-02-04 00:55:14.750145 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.750150 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.750153 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.750157 | orchestrator | 2026-02-04 00:55:14.750161 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-04 00:55:14.750165 | orchestrator | Wednesday 04 February 2026 00:48:02 +0000 (0:00:00.274) 0:03:34.209 **** 2026-02-04 00:55:14.750169 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.750173 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.750177 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.750181 | orchestrator | 2026-02-04 00:55:14.750185 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-04 00:55:14.750208 | orchestrator | Wednesday 04 February 2026 00:48:02 +0000 (0:00:00.265) 0:03:34.474 **** 2026-02-04 00:55:14.750212 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.750223 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.750228 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.750231 | orchestrator | 
2026-02-04 00:55:14.750235 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-04 00:55:14.750239 | orchestrator | Wednesday 04 February 2026 00:48:02 +0000 (0:00:00.261) 0:03:34.735 **** 2026-02-04 00:55:14.750243 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.750247 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.750251 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.750263 | orchestrator | 2026-02-04 00:55:14.750269 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-04 00:55:14.750275 | orchestrator | Wednesday 04 February 2026 00:48:03 +0000 (0:00:00.472) 0:03:35.208 **** 2026-02-04 00:55:14.750281 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.750287 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.750294 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.750300 | orchestrator | 2026-02-04 00:55:14.750307 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-04 00:55:14.750313 | orchestrator | Wednesday 04 February 2026 00:48:03 +0000 (0:00:00.498) 0:03:35.706 **** 2026-02-04 00:55:14.750317 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.750321 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.750324 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.750328 | orchestrator | 2026-02-04 00:55:14.750332 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-04 00:55:14.750336 | orchestrator | Wednesday 04 February 2026 00:48:04 +0000 (0:00:00.281) 0:03:35.988 **** 2026-02-04 00:55:14.750340 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:55:14.750343 | orchestrator | 2026-02-04 00:55:14.750348 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] 
************** 2026-02-04 00:55:14.750354 | orchestrator | Wednesday 04 February 2026 00:48:04 +0000 (0:00:00.615) 0:03:36.603 **** 2026-02-04 00:55:14.750360 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.750365 | orchestrator | 2026-02-04 00:55:14.750402 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-04 00:55:14.750409 | orchestrator | Wednesday 04 February 2026 00:48:04 +0000 (0:00:00.125) 0:03:36.729 **** 2026-02-04 00:55:14.750416 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-04 00:55:14.750421 | orchestrator | 2026-02-04 00:55:14.750427 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-04 00:55:14.750434 | orchestrator | Wednesday 04 February 2026 00:48:05 +0000 (0:00:00.958) 0:03:37.687 **** 2026-02-04 00:55:14.750439 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.750446 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.750452 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.750459 | orchestrator | 2026-02-04 00:55:14.750464 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-04 00:55:14.750470 | orchestrator | Wednesday 04 February 2026 00:48:06 +0000 (0:00:00.290) 0:03:37.978 **** 2026-02-04 00:55:14.750477 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.750481 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.750485 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.750489 | orchestrator | 2026-02-04 00:55:14.750493 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-04 00:55:14.750497 | orchestrator | Wednesday 04 February 2026 00:48:06 +0000 (0:00:00.289) 0:03:38.267 **** 2026-02-04 00:55:14.750501 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:14.750504 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:14.750508 | 
orchestrator | changed: [testbed-node-2] 2026-02-04 00:55:14.750512 | orchestrator | 2026-02-04 00:55:14.750516 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-04 00:55:14.750520 | orchestrator | Wednesday 04 February 2026 00:48:08 +0000 (0:00:01.562) 0:03:39.830 **** 2026-02-04 00:55:14.750524 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:14.750533 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:14.750537 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:55:14.750540 | orchestrator | 2026-02-04 00:55:14.750544 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-04 00:55:14.750548 | orchestrator | Wednesday 04 February 2026 00:48:08 +0000 (0:00:00.789) 0:03:40.620 **** 2026-02-04 00:55:14.750552 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:14.750556 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:14.750560 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:55:14.750563 | orchestrator | 2026-02-04 00:55:14.750567 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-04 00:55:14.750571 | orchestrator | Wednesday 04 February 2026 00:48:09 +0000 (0:00:00.733) 0:03:41.353 **** 2026-02-04 00:55:14.750575 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.750579 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.750583 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.750587 | orchestrator | 2026-02-04 00:55:14.750590 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-04 00:55:14.750599 | orchestrator | Wednesday 04 February 2026 00:48:10 +0000 (0:00:00.666) 0:03:42.020 **** 2026-02-04 00:55:14.750603 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:14.750607 | orchestrator | 2026-02-04 00:55:14.750610 | orchestrator | TASK [ceph-mon : Slurp admin keyring] 
****************************************** 2026-02-04 00:55:14.750614 | orchestrator | Wednesday 04 February 2026 00:48:11 +0000 (0:00:01.617) 0:03:43.637 **** 2026-02-04 00:55:14.750618 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.750622 | orchestrator | 2026-02-04 00:55:14.750626 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-04 00:55:14.750629 | orchestrator | Wednesday 04 February 2026 00:48:13 +0000 (0:00:01.206) 0:03:44.844 **** 2026-02-04 00:55:14.750633 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-04 00:55:14.750637 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:55:14.750641 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:55:14.750645 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-04 00:55:14.750648 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-04 00:55:14.750652 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-04 00:55:14.750656 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 00:55:14.750660 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-02-04 00:55:14.750664 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 00:55:14.750668 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-02-04 00:55:14.750671 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-04 00:55:14.750675 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-02-04 00:55:14.750679 | orchestrator | 2026-02-04 00:55:14.750683 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-04 00:55:14.750687 | orchestrator | Wednesday 04 February 2026 00:48:16 +0000 (0:00:03.423) 0:03:48.267 **** 2026-02-04 00:55:14.750690 
| orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:14.750694 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:14.750698 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:55:14.750702 | orchestrator | 2026-02-04 00:55:14.750705 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-04 00:55:14.750709 | orchestrator | Wednesday 04 February 2026 00:48:17 +0000 (0:00:01.402) 0:03:49.670 **** 2026-02-04 00:55:14.750713 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.750717 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.750721 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.750725 | orchestrator | 2026-02-04 00:55:14.750728 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-04 00:55:14.750732 | orchestrator | Wednesday 04 February 2026 00:48:18 +0000 (0:00:00.295) 0:03:49.965 **** 2026-02-04 00:55:14.750740 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:14.750743 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:14.750747 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:14.750751 | orchestrator | 2026-02-04 00:55:14.750755 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-04 00:55:14.750759 | orchestrator | Wednesday 04 February 2026 00:48:18 +0000 (0:00:00.438) 0:03:50.404 **** 2026-02-04 00:55:14.750763 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:14.750783 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:14.750788 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:55:14.750792 | orchestrator | 2026-02-04 00:55:14.750796 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-04 00:55:14.750799 | orchestrator | Wednesday 04 February 2026 00:48:20 +0000 (0:00:01.602) 0:03:52.006 **** 2026-02-04 00:55:14.750803 | orchestrator | changed: [testbed-node-0] 
2026-02-04 00:55:14.750807 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:14.750811 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:55:14.750815 | orchestrator | 2026-02-04 00:55:14.750819 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-04 00:55:14.750822 | orchestrator | Wednesday 04 February 2026 00:48:21 +0000 (0:00:01.457) 0:03:53.464 **** 2026-02-04 00:55:14.750826 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.750830 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.750834 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.750838 | orchestrator | 2026-02-04 00:55:14.750842 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-04 00:55:14.750845 | orchestrator | Wednesday 04 February 2026 00:48:21 +0000 (0:00:00.358) 0:03:53.822 **** 2026-02-04 00:55:14.750849 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:55:14.750853 | orchestrator | 2026-02-04 00:55:14.750857 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-04 00:55:14.750861 | orchestrator | Wednesday 04 February 2026 00:48:22 +0000 (0:00:00.636) 0:03:54.458 **** 2026-02-04 00:55:14.750865 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.750868 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:14.750872 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.750876 | orchestrator | 2026-02-04 00:55:14.750880 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-04 00:55:14.750884 | orchestrator | Wednesday 04 February 2026 00:48:22 +0000 (0:00:00.246) 0:03:54.705 **** 2026-02-04 00:55:14.750888 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:14.750892 | orchestrator | skipping: 
[testbed-node-1] 2026-02-04 00:55:14.750895 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:14.750899 | orchestrator | 2026-02-04 00:55:14.750903 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-04 00:55:14.750907 | orchestrator | Wednesday 04 February 2026 00:48:23 +0000 (0:00:00.299) 0:03:55.004 **** 2026-02-04 00:55:14.750911 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-2, testbed-node-1 2026-02-04 00:55:14.750915 | orchestrator | 2026-02-04 00:55:14.750919 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-04 00:55:14.750926 | orchestrator | Wednesday 04 February 2026 00:48:24 +0000 (0:00:01.120) 0:03:56.125 **** 2026-02-04 00:55:14.750930 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:14.750934 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:14.750938 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:55:14.750942 | orchestrator | 2026-02-04 00:55:14.750946 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-04 00:55:14.750949 | orchestrator | Wednesday 04 February 2026 00:48:25 +0000 (0:00:01.438) 0:03:57.564 **** 2026-02-04 00:55:14.750953 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:14.750957 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:14.750964 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:55:14.750968 | orchestrator | 2026-02-04 00:55:14.750972 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-04 00:55:14.750976 | orchestrator | Wednesday 04 February 2026 00:48:26 +0000 (0:00:01.026) 0:03:58.590 **** 2026-02-04 00:55:14.750980 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:14.750983 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:14.750987 | orchestrator | changed: 
[testbed-node-2] 2026-02-04 00:55:14.750991 | orchestrator | 2026-02-04 00:55:14.750995 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-04 00:55:14.750999 | orchestrator | Wednesday 04 February 2026 00:48:28 +0000 (0:00:01.608) 0:04:00.199 **** 2026-02-04 00:55:14.751002 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:14.751006 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:14.751010 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:55:14.751014 | orchestrator | 2026-02-04 00:55:14.751018 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-04 00:55:14.751021 | orchestrator | Wednesday 04 February 2026 00:48:30 +0000 (0:00:02.232) 0:04:02.431 **** 2026-02-04 00:55:14.751025 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:55:14.751029 | orchestrator | 2026-02-04 00:55:14.751033 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-04 00:55:14.751037 | orchestrator | Wednesday 04 February 2026 00:48:31 +0000 (0:00:00.561) 0:04:02.993 **** 2026-02-04 00:55:14.751041 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-04 00:55:14.751044 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.751048 | orchestrator |
2026-02-04 00:55:14.751052 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-02-04 00:55:14.751056 | orchestrator | Wednesday 04 February 2026 00:48:53 +0000 (0:00:22.055) 0:04:25.048 ****
2026-02-04 00:55:14.751060 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.751063 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.751067 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.751071 | orchestrator |
2026-02-04 00:55:14.751077 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-02-04 00:55:14.751084 | orchestrator | Wednesday 04 February 2026 00:49:02 +0000 (0:00:09.658) 0:04:34.706 ****
2026-02-04 00:55:14.751090 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.751096 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.751102 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.751108 | orchestrator |
2026-02-04 00:55:14.751113 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-02-04 00:55:14.751179 | orchestrator | Wednesday 04 February 2026 00:49:03 +0000 (0:00:00.352) 0:04:35.059 ****
2026-02-04 00:55:14.751192 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5ed1c2ad3acbf2d71040064d376ff82c50ef99d2'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-04 00:55:14.751201 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5ed1c2ad3acbf2d71040064d376ff82c50ef99d2'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-04 00:55:14.751209 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5ed1c2ad3acbf2d71040064d376ff82c50ef99d2'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-04 00:55:14.751220 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5ed1c2ad3acbf2d71040064d376ff82c50ef99d2'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-04 00:55:14.751229 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5ed1c2ad3acbf2d71040064d376ff82c50ef99d2'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-04 00:55:14.751234 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__5ed1c2ad3acbf2d71040064d376ff82c50ef99d2'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__5ed1c2ad3acbf2d71040064d376ff82c50ef99d2'}])
2026-02-04 00:55:14.751242 | orchestrator |
2026-02-04 00:55:14.751249 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-04 00:55:14.751254 | orchestrator | Wednesday 04 February 2026 00:49:19 +0000 (0:00:16.437) 0:04:51.497 ****
2026-02-04 00:55:14.751260 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.751266 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.751272 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.751279 | orchestrator |
2026-02-04 00:55:14.751285 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-04 00:55:14.751291 | orchestrator | Wednesday 04 February 2026 00:49:20 +0000 (0:00:00.814) 0:04:51.871 ****
2026-02-04 00:55:14.751297 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:55:14.751303 | orchestrator |
2026-02-04 00:55:14.751309 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-04 00:55:14.751314 | orchestrator | Wednesday 04 February 2026 00:49:20 +0000 (0:00:00.814) 0:04:52.686 ****
2026-02-04 00:55:14.751318 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.751322 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.751326 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.751329 | orchestrator |
2026-02-04 00:55:14.751333 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-04 00:55:14.751337 | orchestrator | Wednesday 04 February 2026 00:49:21 +0000 (0:00:00.319) 0:04:53.006 ****
2026-02-04 00:55:14.751341 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.751345 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.751349 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.751352 | orchestrator |
2026-02-04 00:55:14.751356 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-04 00:55:14.751360 | orchestrator | Wednesday 04 February 2026 00:49:21 +0000 (0:00:00.328) 0:04:53.335 ****
2026-02-04 00:55:14.751364 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 00:55:14.751367 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-04 00:55:14.751371 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-04 00:55:14.751375 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.751379 | orchestrator |
2026-02-04 00:55:14.751383 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-04 00:55:14.751387 | orchestrator | Wednesday 04 February 2026 00:49:22 +0000 (0:00:00.812) 0:04:54.148 ****
2026-02-04 00:55:14.751390 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.751394 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.751417 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.751422 | orchestrator |
2026-02-04 00:55:14.751426 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-02-04 00:55:14.751429 | orchestrator |
2026-02-04 00:55:14.751433 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-04 00:55:14.751437 | orchestrator | Wednesday 04 February 2026 00:49:23 +0000 (0:00:00.773) 0:04:54.922 ****
2026-02-04 00:55:14.751441 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:55:14.751446 | orchestrator |
2026-02-04 00:55:14.751449 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-04 00:55:14.751453 | orchestrator | Wednesday 04 February 2026 00:49:23 +0000 (0:00:00.462) 0:04:55.384 ****
2026-02-04 00:55:14.751457 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:55:14.751461 | orchestrator |
2026-02-04 00:55:14.751465 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-04 00:55:14.751468 | orchestrator | Wednesday 04 February 2026 00:49:24 +0000 (0:00:00.658) 0:04:56.043 ****
2026-02-04 00:55:14.751472 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.751476 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.751480 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.751484 | orchestrator |
2026-02-04 00:55:14.751487 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-04 00:55:14.751491 | orchestrator | Wednesday 04 February 2026 00:49:24 +0000 (0:00:00.717) 0:04:56.760 ****
2026-02-04 00:55:14.751495 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.751499 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.751503 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.751506 | orchestrator |
2026-02-04 00:55:14.751510 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-04 00:55:14.751514 | orchestrator | Wednesday 04 February 2026 00:49:25 +0000 (0:00:00.260) 0:04:57.020 ****
2026-02-04 00:55:14.751518 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.751522 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.751525 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.751529 | orchestrator |
2026-02-04 00:55:14.751533 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-04 00:55:14.751537 | orchestrator | Wednesday 04 February 2026 00:49:25 +0000 (0:00:00.409) 0:04:57.430 ****
2026-02-04 00:55:14.751541 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.751545 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.751549 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.751552 | orchestrator |
2026-02-04 00:55:14.751559 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-04 00:55:14.751563 | orchestrator | Wednesday 04 February 2026 00:49:25 +0000 (0:00:00.281) 0:04:57.711 ****
2026-02-04 00:55:14.751567 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.751571 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.751575 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.751579 | orchestrator |
2026-02-04 00:55:14.751582 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-04 00:55:14.751586 | orchestrator | Wednesday 04 February 2026 00:49:26 +0000 (0:00:00.627) 0:04:58.338 ****
2026-02-04 00:55:14.751590 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.751594 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.751598 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.751602 | orchestrator |
2026-02-04 00:55:14.751605 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-04 00:55:14.751609 | orchestrator | Wednesday 04 February 2026 00:49:26 +0000 (0:00:00.293) 0:04:58.631 ****
2026-02-04 00:55:14.751613 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.751617 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.751636 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.751640 | orchestrator |
2026-02-04 00:55:14.751644 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-04 00:55:14.751648 | orchestrator | Wednesday 04 February 2026 00:49:27 +0000 (0:00:00.269) 0:04:58.901 ****
2026-02-04 00:55:14.751652 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.751655 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.751659 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.751663 | orchestrator |
2026-02-04 00:55:14.751667 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-04 00:55:14.751670 | orchestrator | Wednesday 04 February 2026 00:49:27 +0000 (0:00:00.828) 0:04:59.730 ****
2026-02-04 00:55:14.751674 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.751678 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.751682 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.751685 | orchestrator |
2026-02-04 00:55:14.751689 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-04 00:55:14.751693 | orchestrator | Wednesday 04 February 2026 00:49:28 +0000 (0:00:00.670) 0:05:00.400 ****
2026-02-04 00:55:14.751697 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.751700 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.751704 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.751708 | orchestrator |
2026-02-04 00:55:14.751711 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-04 00:55:14.751715 | orchestrator | Wednesday 04 February 2026 00:49:28 +0000 (0:00:00.290) 0:05:00.691 ****
2026-02-04 00:55:14.751719 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.751723 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.751727 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.751730 | orchestrator |
2026-02-04 00:55:14.751734 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-04 00:55:14.751738 | orchestrator | Wednesday 04 February 2026 00:49:29 +0000 (0:00:00.337) 0:05:01.028 ****
2026-02-04 00:55:14.751741 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.751745 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.751749 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.751753 | orchestrator |
2026-02-04 00:55:14.751756 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-04 00:55:14.751773 | orchestrator | Wednesday 04 February 2026 00:49:29 +0000 (0:00:00.663) 0:05:01.691 ****
2026-02-04 00:55:14.751778 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.751781 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.751785 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.751789 | orchestrator |
2026-02-04 00:55:14.751793 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-04 00:55:14.751796 | orchestrator | Wednesday 04 February 2026 00:49:30 +0000 (0:00:00.593) 0:05:02.285 ****
2026-02-04 00:55:14.751800 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.751804 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.751808 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.751812 | orchestrator |
2026-02-04 00:55:14.751815 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-04 00:55:14.751819 | orchestrator | Wednesday 04 February 2026 00:49:30 +0000 (0:00:00.311) 0:05:02.596 ****
2026-02-04 00:55:14.751823 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.751827 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.751830 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.751834 | orchestrator |
2026-02-04 00:55:14.751838 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-04 00:55:14.751842 | orchestrator | Wednesday 04 February 2026 00:49:31 +0000 (0:00:00.336) 0:05:02.933 ****
2026-02-04 00:55:14.751845 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.751849 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.751853 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.751857 | orchestrator |
2026-02-04 00:55:14.751865 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-04 00:55:14.751869 | orchestrator | Wednesday 04 February 2026 00:49:31 +0000 (0:00:00.549) 0:05:03.483 ****
2026-02-04 00:55:14.751873 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.751876 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.751880 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.751884 | orchestrator |
2026-02-04 00:55:14.751888 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-04 00:55:14.751891 | orchestrator | Wednesday 04 February 2026 00:49:32 +0000 (0:00:00.361) 0:05:03.844 ****
2026-02-04 00:55:14.751895 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.751899 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.751903 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.751907 | orchestrator |
2026-02-04 00:55:14.751911 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-04 00:55:14.751914 | orchestrator | Wednesday 04 February 2026 00:49:32 +0000 (0:00:00.334) 0:05:04.178 ****
2026-02-04 00:55:14.751918 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.751922 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.751926 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.751930 | orchestrator |
2026-02-04 00:55:14.751934 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-04 00:55:14.751940 | orchestrator | Wednesday 04 February 2026 00:49:33 +0000 (0:00:00.762) 0:05:04.941 ****
2026-02-04 00:55:14.751944 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 00:55:14.751948 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 00:55:14.751952 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 00:55:14.751956 | orchestrator |
2026-02-04 00:55:14.751960 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-04 00:55:14.751964 | orchestrator | Wednesday 04 February 2026 00:49:33 +0000 (0:00:00.665) 0:05:05.606 ****
2026-02-04 00:55:14.751968 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:55:14.751972 | orchestrator |
2026-02-04 00:55:14.751975 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-04 00:55:14.751979 | orchestrator | Wednesday 04 February 2026 00:49:34 +0000 (0:00:00.530) 0:05:06.137 ****
2026-02-04 00:55:14.751983 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:55:14.751987 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:55:14.751990 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:55:14.751994 | orchestrator |
2026-02-04 00:55:14.751998 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-04 00:55:14.752002 | orchestrator | Wednesday 04 February 2026 00:49:34 +0000 (0:00:00.647) 0:05:06.784 ****
2026-02-04 00:55:14.752005 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.752009 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.752013 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.752017 | orchestrator |
2026-02-04 00:55:14.752020 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-04 00:55:14.752024 | orchestrator | Wednesday 04 February 2026 00:49:35 +0000 (0:00:00.536) 0:05:07.320 ****
2026-02-04 00:55:14.752028 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-04 00:55:14.752032 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-04 00:55:14.752036 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-04 00:55:14.752040 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-02-04 00:55:14.752043 | orchestrator |
2026-02-04 00:55:14.752047 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-04 00:55:14.752051 | orchestrator | Wednesday 04 February 2026 00:49:46 +0000 (0:00:10.812) 0:05:18.132 ****
2026-02-04 00:55:14.752055 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.752058 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.752078 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.752082 | orchestrator |
2026-02-04 00:55:14.752086 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-04 00:55:14.752089 | orchestrator | Wednesday 04 February 2026 00:49:46 +0000 (0:00:00.310) 0:05:18.443 ****
2026-02-04 00:55:14.752093 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-04 00:55:14.752097 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-04 00:55:14.752101 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-04 00:55:14.752105 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-04 00:55:14.752108 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-04 00:55:14.752142 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-04 00:55:14.752147 | orchestrator |
2026-02-04 00:55:14.752151 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-04 00:55:14.752155 | orchestrator | Wednesday 04 February 2026 00:49:49 +0000 (0:00:02.524) 0:05:20.968 ****
2026-02-04 00:55:14.752159 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-04 00:55:14.752163 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-04 00:55:14.752167 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-04 00:55:14.752171 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-04 00:55:14.752175 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-04 00:55:14.752178 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-04 00:55:14.752182 | orchestrator |
2026-02-04 00:55:14.752186 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-04 00:55:14.752190 | orchestrator | Wednesday 04 February 2026 00:49:50 +0000 (0:00:01.136) 0:05:22.104 ****
2026-02-04 00:55:14.752194 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.752198 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.752202 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.752205 | orchestrator |
2026-02-04 00:55:14.752209 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-04 00:55:14.752213 | orchestrator | Wednesday 04 February 2026 00:49:51 +0000 (0:00:00.863) 0:05:22.968 ****
2026-02-04 00:55:14.752217 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.752221 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.752225 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.752228 | orchestrator |
2026-02-04 00:55:14.752232 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-04 00:55:14.752236 | orchestrator | Wednesday 04 February 2026 00:49:51 +0000 (0:00:00.255) 0:05:23.223 ****
2026-02-04 00:55:14.752240 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.752244 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.752247 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.752251 | orchestrator |
2026-02-04 00:55:14.752255 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-04 00:55:14.752259 | orchestrator | Wednesday 04 February 2026 00:49:51 +0000 (0:00:00.251) 0:05:23.474 ****
2026-02-04 00:55:14.752263 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:55:14.752267 | orchestrator |
2026-02-04 00:55:14.752271 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-04 00:55:14.752274 | orchestrator | Wednesday 04 February 2026 00:49:52 +0000 (0:00:00.569) 0:05:24.044 ****
2026-02-04 00:55:14.752278 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.752282 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.752291 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.752295 | orchestrator |
2026-02-04 00:55:14.752299 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-04 00:55:14.752303 | orchestrator | Wednesday 04 February 2026 00:49:52 +0000 (0:00:00.301) 0:05:24.345 ****
2026-02-04 00:55:14.752307 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.752315 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.752318 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.752322 | orchestrator |
2026-02-04 00:55:14.752326 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-04 00:55:14.752330 | orchestrator | Wednesday 04 February 2026 00:49:52 +0000 (0:00:00.264) 0:05:24.610 ****
2026-02-04 00:55:14.752334 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:55:14.752338 | orchestrator |
2026-02-04 00:55:14.752342 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-04 00:55:14.752345 | orchestrator | Wednesday 04 February 2026 00:49:53 +0000 (0:00:00.645) 0:05:25.255 ****
2026-02-04 00:55:14.752349 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:55:14.752353 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:55:14.752357 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:55:14.752361 | orchestrator |
2026-02-04 00:55:14.752365 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-04 00:55:14.752368 | orchestrator | Wednesday 04 February 2026 00:49:54 +0000 (0:00:01.228) 0:05:26.484 ****
2026-02-04 00:55:14.752372 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:55:14.752376 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:55:14.752380 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:55:14.752384 | orchestrator |
2026-02-04 00:55:14.752387 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-04 00:55:14.752391 | orchestrator | Wednesday 04 February 2026 00:49:55 +0000 (0:00:01.152) 0:05:27.637 ****
2026-02-04 00:55:14.752395 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:55:14.752399 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:55:14.752403 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:55:14.752406 | orchestrator |
2026-02-04 00:55:14.752410 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-02-04 00:55:14.752414 | orchestrator | Wednesday 04 February 2026 00:49:57 +0000 (0:00:01.894) 0:05:29.531 ****
2026-02-04 00:55:14.752418 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:55:14.752422 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:55:14.752426 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:55:14.752429 | orchestrator |
2026-02-04 00:55:14.752433 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-04 00:55:14.752437 | orchestrator | Wednesday 04 February 2026 00:49:59 +0000 (0:00:01.981) 0:05:31.513 ****
2026-02-04 00:55:14.752441 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.752445 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.752449 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-02-04 00:55:14.752453 | orchestrator |
2026-02-04 00:55:14.752456 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-02-04 00:55:14.752460 | orchestrator | Wednesday 04 February 2026 00:50:00 +0000 (0:00:00.618) 0:05:32.132 ****
2026-02-04 00:55:14.752477 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-02-04 00:55:14.752482 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-02-04 00:55:14.752486 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-02-04 00:55:14.752489 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-02-04 00:55:14.752493 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-02-04 00:55:14.752497 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left).
2026-02-04 00:55:14.752501 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-04 00:55:14.752505 | orchestrator |
2026-02-04 00:55:14.752509 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-02-04 00:55:14.752516 | orchestrator | Wednesday 04 February 2026 00:50:36 +0000 (0:00:36.229) 0:06:08.361 ****
2026-02-04 00:55:14.752520 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-04 00:55:14.752524 | orchestrator |
2026-02-04 00:55:14.752528 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-02-04 00:55:14.752532 | orchestrator | Wednesday 04 February 2026 00:50:37 +0000 (0:00:01.248) 0:06:09.609 ****
2026-02-04 00:55:14.752536 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.752539 | orchestrator |
2026-02-04 00:55:14.752543 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-02-04 00:55:14.752547 | orchestrator | Wednesday 04 February 2026 00:50:38 +0000 (0:00:00.280) 0:06:09.889 ****
2026-02-04 00:55:14.752551 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.752555 | orchestrator |
2026-02-04 00:55:14.752559 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-02-04 00:55:14.752563 | orchestrator | Wednesday 04 February 2026 00:50:38 +0000 (0:00:00.137) 0:06:10.026 ****
2026-02-04 00:55:14.752567 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-02-04 00:55:14.752570 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-02-04 00:55:14.752574 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-02-04 00:55:14.752578 | orchestrator |
2026-02-04 00:55:14.752582 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-02-04 00:55:14.752591 | orchestrator | Wednesday 04 February 2026 00:50:44 +0000 (0:00:06.444) 0:06:16.471 ****
2026-02-04 00:55:14.752595 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-02-04 00:55:14.752599 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-02-04 00:55:14.752602 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-02-04 00:55:14.752606 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-02-04 00:55:14.752610 | orchestrator |
2026-02-04 00:55:14.752614 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-04 00:55:14.752618 | orchestrator | Wednesday 04 February 2026 00:50:49 +0000 (0:00:05.285) 0:06:21.756 ****
2026-02-04 00:55:14.752622 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:55:14.752625 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:55:14.752629 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:55:14.752633 | orchestrator |
2026-02-04 00:55:14.752637 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-04 00:55:14.752641 | orchestrator | Wednesday 04 February 2026 00:50:50 +0000 (0:00:00.786) 0:06:22.543 ****
2026-02-04 00:55:14.752645 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:55:14.752648 | orchestrator |
2026-02-04 00:55:14.752652 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-04 00:55:14.752656 | orchestrator | Wednesday 04 February 2026 00:50:51 +0000 (0:00:00.528) 0:06:23.072 ****
2026-02-04 00:55:14.752660 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.752664 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.752668 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.752672 | orchestrator |
2026-02-04 00:55:14.752676 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-04 00:55:14.752679 | orchestrator | Wednesday 04 February 2026 00:50:51 +0000 (0:00:00.548) 0:06:23.620 ****
2026-02-04 00:55:14.752683 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:55:14.752687 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:55:14.752691 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:55:14.752695 | orchestrator |
2026-02-04 00:55:14.752698 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-04 00:55:14.752702 | orchestrator | Wednesday 04 February 2026 00:50:53 +0000 (0:00:01.335) 0:06:24.955 ****
2026-02-04 00:55:14.752710 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 00:55:14.752714 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-04 00:55:14.752718 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-04 00:55:14.752722 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.752726 | orchestrator |
2026-02-04 00:55:14.752730 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-04 00:55:14.752733 | orchestrator | Wednesday 04 February 2026 00:50:53 +0000 (0:00:00.537) 0:06:25.492 ****
2026-02-04 00:55:14.752737 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.752741 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.752745 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.752749 | orchestrator |
2026-02-04 00:55:14.752753 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-02-04 00:55:14.752757 | orchestrator |
2026-02-04 00:55:14.752760 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-04 00:55:14.752777 | orchestrator | Wednesday 04 February 2026 00:50:54 +0000 (0:00:00.649) 0:06:26.142 ****
2026-02-04 00:55:14.752781 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 00:55:14.752785 | orchestrator |
2026-02-04 00:55:14.752789 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-04 00:55:14.752793 | orchestrator | Wednesday 04 February 2026 00:50:54 +0000 (0:00:00.438) 0:06:26.580 ****
2026-02-04 00:55:14.752797 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 00:55:14.752801 | orchestrator |
2026-02-04 00:55:14.752804 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-04 00:55:14.752808 | orchestrator | Wednesday 04 February 2026 00:50:55 +0000 (0:00:00.644) 0:06:27.225 ****
2026-02-04 00:55:14.752812 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.752816 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.752820 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.752824 | orchestrator |
2026-02-04 00:55:14.752827 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-04 00:55:14.752831 | orchestrator | Wednesday 04 February 2026 00:50:55 +0000 (0:00:00.261) 0:06:27.487 ****
2026-02-04 00:55:14.752835 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.752839 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.752843 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.752846 | orchestrator |
2026-02-04 00:55:14.752850 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-04 00:55:14.752854 | orchestrator | Wednesday 04 February 2026 00:50:56 +0000 (0:00:00.819) 0:06:28.307 ****
2026-02-04 00:55:14.752858 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.752862 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.752866 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.752869 | orchestrator |
2026-02-04 00:55:14.752876 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-04 00:55:14.752882 | orchestrator | Wednesday 04 February 2026 00:50:57 +0000 (0:00:00.734) 0:06:29.042 ****
2026-02-04 00:55:14.752888 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.752894 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.752901 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.752907 | orchestrator |
2026-02-04 00:55:14.752913 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-04 00:55:14.752920 | orchestrator | Wednesday 04 February 2026 00:50:57 +0000 (0:00:00.746) 0:06:29.789 ****
2026-02-04 00:55:14.752926 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.752932 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.752937 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.752944 | orchestrator |
2026-02-04 00:55:14.752954 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-04 00:55:14.752964 | orchestrator | Wednesday 04 February 2026 00:50:58 +0000 (0:00:00.446) 0:06:30.235 ****
2026-02-04 00:55:14.752970 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.752976 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.752982 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.752988 | orchestrator |
2026-02-04 00:55:14.752994 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-04 00:55:14.753000 | orchestrator | Wednesday 04 February 2026 00:50:58 +0000 (0:00:00.255) 0:06:30.508 ****
2026-02-04 00:55:14.753006 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.753012 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.753018 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.753024 | orchestrator |
2026-02-04 00:55:14.753030 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-04 00:55:14.753036 | orchestrator | Wednesday 04 February 2026 00:50:58 +0000 (0:00:00.255) 0:06:30.763 ****
2026-02-04 00:55:14.753042 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.753048 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.753055 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.753061 | orchestrator |
2026-02-04 00:55:14.753067 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-04 00:55:14.753073 | orchestrator | Wednesday 04 February 2026 00:50:59 +0000 (0:00:00.846) 0:06:31.609 ****
2026-02-04 00:55:14.753079 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.753085 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.753091 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.753097 | orchestrator |
2026-02-04 00:55:14.753102 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-04 00:55:14.753108 | orchestrator | Wednesday 04 February 2026 00:51:00 +0000 (0:00:01.096) 0:06:32.706 ****
2026-02-04 00:55:14.753114 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.753132 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.753138 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.753144 | orchestrator |
2026-02-04 00:55:14.753149 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-04 00:55:14.753156 | orchestrator | Wednesday 04 February 2026 00:51:01 +0000 (0:00:00.316) 0:06:33.022 ****
2026-02-04 00:55:14.753161 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.753167 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.753173 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.753179 | orchestrator |
2026-02-04 00:55:14.753185 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-04 00:55:14.753191 | orchestrator | Wednesday 04 February 2026 00:51:01 +0000 (0:00:00.290) 0:06:33.313 ****
2026-02-04 00:55:14.753197 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.753202 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.753208 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.753214 | orchestrator |
2026-02-04 00:55:14.753220 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-04 00:55:14.753226 | orchestrator | Wednesday 04 February 2026 00:51:01 +0000 (0:00:00.304) 0:06:33.617 ****
2026-02-04 00:55:14.753232 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.753238 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.753244 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.753250 | orchestrator |
2026-02-04 00:55:14.753256 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-04 00:55:14.753268 | orchestrator | Wednesday 04 February 2026 00:51:02 +0000 (0:00:00.588) 0:06:34.206 ****
2026-02-04 00:55:14.753275 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.753281 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.753287 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.753293 | orchestrator |
2026-02-04 00:55:14.753298 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-04 00:55:14.753305 | orchestrator | Wednesday 04 February 2026 00:51:02 +0000 (0:00:00.328) 0:06:34.534 ****
2026-02-04 00:55:14.753318 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.753324 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.753330 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.753336 | orchestrator |
2026-02-04 00:55:14.753342 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-04 00:55:14.753348 | orchestrator | Wednesday 04 February 2026 00:51:02 +0000 (0:00:00.288) 0:06:34.823 ****
2026-02-04 00:55:14.753354 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.753361 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.753366 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.753370 | orchestrator |
2026-02-04 00:55:14.753373 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-04 00:55:14.753377 | orchestrator | Wednesday 04 February 2026 00:51:03 +0000 (0:00:00.290) 0:06:35.113 ****
2026-02-04 00:55:14.753381 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.753385 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.753389 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.753392 | orchestrator |
2026-02-04 00:55:14.753396 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-04 00:55:14.753400 | orchestrator | Wednesday 04 February 2026 00:51:03 +0000 (0:00:00.451) 0:06:35.565 ****
2026-02-04 00:55:14.753404 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.753408 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.753412 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.753415 | orchestrator |
2026-02-04 00:55:14.753419 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-04 00:55:14.753423 | orchestrator | Wednesday 04 February 2026 00:51:04 +0000 (0:00:00.290) 0:06:35.855 ****
2026-02-04 00:55:14.753427 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.753431 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.753434 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.753438 | orchestrator |
2026-02-04 00:55:14.753442 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-02-04 00:55:14.753446 | orchestrator | Wednesday 04 February 2026 00:51:04 +0000 (0:00:00.500) 0:06:36.356 ****
2026-02-04 00:55:14.753450 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.753454 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.753457 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.753461 | orchestrator |
2026-02-04 00:55:14.753465 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-02-04 00:55:14.753473 | orchestrator | Wednesday 04 February 2026 00:51:05 +0000 (0:00:00.478) 0:06:36.834 ****
2026-02-04 00:55:14.753477 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-04 00:55:14.753481 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 00:55:14.753485 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 00:55:14.753489 | orchestrator |
2026-02-04 00:55:14.753493 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-02-04 00:55:14.753497 | orchestrator | Wednesday 04 February 2026 00:51:05 +0000 (0:00:00.578) 0:06:37.413 ****
2026-02-04 00:55:14.753500 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 00:55:14.753504 | orchestrator |
2026-02-04 00:55:14.753508 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-02-04 00:55:14.753512 | orchestrator | Wednesday 04 February 2026 00:51:06 +0000 (0:00:00.446) 0:06:37.860 ****
2026-02-04 00:55:14.753516 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.753519 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.753523 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.753527 | orchestrator |
2026-02-04 00:55:14.753531 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-02-04 00:55:14.753535 | orchestrator | Wednesday 04 February 2026 00:51:06 +0000 (0:00:00.392) 0:06:38.252 ****
2026-02-04 00:55:14.753543 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.753547 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.753551 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.753555 | orchestrator |
2026-02-04 00:55:14.753559 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-02-04 00:55:14.753562 | orchestrator | Wednesday 04 February 2026 00:51:06 +0000 (0:00:00.267) 0:06:38.519 ****
2026-02-04 00:55:14.753566 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.753570 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.753574 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.753578 | orchestrator |
2026-02-04 00:55:14.753581 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-02-04 00:55:14.753585 | orchestrator | Wednesday 04 February 2026 00:51:07 +0000 (0:00:00.644) 0:06:39.163 ****
2026-02-04 00:55:14.753589 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.753593 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.753597 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.753600 | orchestrator |
2026-02-04 00:55:14.753604 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-02-04 00:55:14.753608 | orchestrator | Wednesday 04 February 2026 00:51:07 +0000 (0:00:00.285) 0:06:39.449 ****
2026-02-04 00:55:14.753612 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-04 00:55:14.753616 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-04 00:55:14.753620 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-04 00:55:14.753630 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-04 00:55:14.753634 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-04 00:55:14.753638 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-04 00:55:14.753642 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-04 00:55:14.753646 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-04 00:55:14.753649 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-04 00:55:14.753653 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-04 00:55:14.753657 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-04 00:55:14.753661 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-04 00:55:14.753665 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-04 00:55:14.753668 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-04 00:55:14.753672 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-04 00:55:14.753676 | orchestrator |
2026-02-04 00:55:14.753680 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-02-04 00:55:14.753684 | orchestrator | Wednesday 04 February 2026 00:51:11 +0000 (0:00:03.489) 0:06:42.938 ****
2026-02-04 00:55:14.753687 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.753691 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.753695 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.753699 | orchestrator |
2026-02-04 00:55:14.753703 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-02-04 00:55:14.753707 | orchestrator | Wednesday 04 February 2026 00:51:11 +0000 (0:00:00.267) 0:06:43.205 ****
2026-02-04 00:55:14.753710 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 00:55:14.753714 | orchestrator |
2026-02-04 00:55:14.753718 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-02-04 00:55:14.753727 | orchestrator | Wednesday 04 February 2026 00:51:11 +0000 (0:00:00.473) 0:06:43.679 ****
2026-02-04 00:55:14.753731 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-04 00:55:14.753737 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-04 00:55:14.753741 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-02-04 00:55:14.753745 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-02-04 00:55:14.753749 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-02-04 00:55:14.753753 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-02-04 00:55:14.753756 | orchestrator |
2026-02-04 00:55:14.753760 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-02-04 00:55:14.753764 | orchestrator | Wednesday 04 February 2026 00:51:13 +0000 (0:00:01.162) 0:06:44.841 ****
2026-02-04 00:55:14.753768 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-04 00:55:14.753772 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-04 00:55:14.753776 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-04 00:55:14.753779 | orchestrator |
2026-02-04 00:55:14.753783 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-02-04 00:55:14.753787 | orchestrator | Wednesday 04 February 2026 00:51:15 +0000 (0:00:02.210) 0:06:47.052 ****
2026-02-04 00:55:14.753791 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-04 00:55:14.753795 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-04 00:55:14.753798 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:55:14.753802 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-04 00:55:14.753806 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-04 00:55:14.753810 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:55:14.753814 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-04 00:55:14.753818 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-04 00:55:14.753821 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:55:14.753825 | orchestrator |
2026-02-04 00:55:14.753829 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-02-04 00:55:14.753833 | orchestrator | Wednesday 04 February 2026 00:51:16 +0000 (0:00:01.312) 0:06:48.365 ****
2026-02-04 00:55:14.753837 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-04 00:55:14.753840 | orchestrator |
2026-02-04 00:55:14.753844 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-02-04 00:55:14.753848 | orchestrator | Wednesday 04 February 2026 00:51:18 +0000 (0:00:02.332) 0:06:50.697 ****
2026-02-04 00:55:14.753852 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 00:55:14.753856 | orchestrator |
2026-02-04 00:55:14.753860 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-02-04 00:55:14.753863 | orchestrator | Wednesday 04 February 2026 00:51:19 +0000 (0:00:00.501) 0:06:51.198 ****
2026-02-04 00:55:14.753868 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-29c6bc8c-f904-55ca-809f-6429b65a49e4', 'data_vg': 'ceph-29c6bc8c-f904-55ca-809f-6429b65a49e4'})
2026-02-04 00:55:14.753873 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7', 'data_vg': 'ceph-6fbd78c3-b583-5fde-80ba-0c2cdf325dc7'})
2026-02-04 00:55:14.753880 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-81b3d681-fa24-5b92-b5b8-11e84f5b22d9', 'data_vg': 'ceph-81b3d681-fa24-5b92-b5b8-11e84f5b22d9'})
2026-02-04 00:55:14.753884 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1b7fb365-e96c-53e1-a018-1a0a8a845031', 'data_vg': 'ceph-1b7fb365-e96c-53e1-a018-1a0a8a845031'})
2026-02-04 00:55:14.753888 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5659fb6c-b6d6-5368-9f3c-0e525a1333df', 'data_vg': 'ceph-5659fb6c-b6d6-5368-9f3c-0e525a1333df'})
2026-02-04 00:55:14.753896 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd', 'data_vg': 'ceph-c6467dc2-49cb-511a-ae45-cb6bd8ce65cd'})
2026-02-04 00:55:14.753900 | orchestrator |
2026-02-04 00:55:14.753904 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-02-04 00:55:14.753907 | orchestrator | Wednesday 04 February 2026 00:51:57 +0000 (0:00:37.999) 0:07:29.197 ****
2026-02-04 00:55:14.753911 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.753915 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.753919 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.753923 | orchestrator |
2026-02-04 00:55:14.753926 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-02-04 00:55:14.753930 | orchestrator | Wednesday 04 February 2026 00:51:57 +0000 (0:00:00.313) 0:07:29.510 ****
2026-02-04 00:55:14.753934 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 00:55:14.753938 | orchestrator |
2026-02-04 00:55:14.753942 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-02-04 00:55:14.753946 | orchestrator | Wednesday 04 February 2026 00:51:58 +0000 (0:00:00.567) 0:07:30.078 ****
2026-02-04 00:55:14.753949 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.753953 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.753957 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.753961 | orchestrator |
2026-02-04 00:55:14.753965 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-02-04 00:55:14.753968 | orchestrator | Wednesday 04 February 2026 00:51:59 +0000 (0:00:00.947) 0:07:31.026 ****
2026-02-04 00:55:14.753972 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.753976 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.753980 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.753983 | orchestrator |
2026-02-04 00:55:14.753987 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-02-04 00:55:14.753993 | orchestrator | Wednesday 04 February 2026 00:52:02 +0000 (0:00:02.839) 0:07:33.865 ****
2026-02-04 00:55:14.753997 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 00:55:14.754001 | orchestrator |
2026-02-04 00:55:14.754005 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-02-04 00:55:14.754009 | orchestrator | Wednesday 04 February 2026 00:52:02 +0000 (0:00:00.498) 0:07:34.364 ****
2026-02-04 00:55:14.754045 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:55:14.754051 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:55:14.754057 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:55:14.754063 | orchestrator |
2026-02-04 00:55:14.754069 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-02-04 00:55:14.754075 | orchestrator | Wednesday 04 February 2026 00:52:03 +0000 (0:00:01.434) 0:07:35.798 ****
2026-02-04 00:55:14.754080 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:55:14.754086 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:55:14.754092 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:55:14.754098 | orchestrator |
2026-02-04 00:55:14.754104 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-02-04 00:55:14.754110 | orchestrator | Wednesday 04 February 2026 00:52:05 +0000 (0:00:01.222) 0:07:37.020 ****
2026-02-04 00:55:14.754116 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:55:14.754166 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:55:14.754174 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:55:14.754178 | orchestrator |
2026-02-04 00:55:14.754182 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-02-04 00:55:14.754185 | orchestrator | Wednesday 04 February 2026 00:52:07 +0000 (0:00:01.891) 0:07:38.912 ****
2026-02-04 00:55:14.754189 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.754193 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.754201 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.754205 | orchestrator |
2026-02-04 00:55:14.754209 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-02-04 00:55:14.754213 | orchestrator | Wednesday 04 February 2026 00:52:07 +0000 (0:00:00.276) 0:07:39.188 ****
2026-02-04 00:55:14.754217 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.754220 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.754224 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.754228 | orchestrator |
2026-02-04 00:55:14.754232 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-02-04 00:55:14.754236 | orchestrator | Wednesday 04 February 2026 00:52:07 +0000 (0:00:00.416) 0:07:39.605 ****
2026-02-04 00:55:14.754240 | orchestrator | ok: [testbed-node-3] => (item=3)
2026-02-04 00:55:14.754243 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-02-04 00:55:14.754247 | orchestrator | ok: [testbed-node-5] => (item=5)
2026-02-04 00:55:14.754251 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-04 00:55:14.754255 | orchestrator | ok: [testbed-node-4] => (item=4)
2026-02-04 00:55:14.754259 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-02-04 00:55:14.754262 | orchestrator |
2026-02-04 00:55:14.754266 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-02-04 00:55:14.754270 | orchestrator | Wednesday 04 February 2026 00:52:08 +0000 (0:00:00.989) 0:07:40.594 ****
2026-02-04 00:55:14.754274 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-02-04 00:55:14.754278 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-02-04 00:55:14.754286 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-02-04 00:55:14.754290 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-02-04 00:55:14.754294 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-02-04 00:55:14.754298 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-04 00:55:14.754302 | orchestrator |
2026-02-04 00:55:14.754306 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-02-04 00:55:14.754310 | orchestrator | Wednesday 04 February 2026 00:52:10 +0000 (0:00:02.069) 0:07:42.663 ****
2026-02-04 00:55:14.754313 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-02-04 00:55:14.754318 | orchestrator | changed: [testbed-node-3] => (item=3)
2026-02-04 00:55:14.754321 | orchestrator | changed: [testbed-node-5] => (item=5)
2026-02-04 00:55:14.754325 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-02-04 00:55:14.754329 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-02-04 00:55:14.754333 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-02-04 00:55:14.754337 | orchestrator |
2026-02-04 00:55:14.754340 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-02-04 00:55:14.754344 | orchestrator | Wednesday 04 February 2026 00:52:14 +0000 (0:00:03.704) 0:07:46.368 ****
2026-02-04 00:55:14.754348 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.754352 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.754356 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-04 00:55:14.754360 | orchestrator |
2026-02-04 00:55:14.754364 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-02-04 00:55:14.754367 | orchestrator | Wednesday 04 February 2026 00:52:17 +0000 (0:00:03.363) 0:07:49.732 ****
2026-02-04 00:55:14.754371 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.754375 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.754379 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-02-04 00:55:14.754383 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-04 00:55:14.754387 | orchestrator |
2026-02-04 00:55:14.754391 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-02-04 00:55:14.754394 | orchestrator | Wednesday 04 February 2026 00:52:30 +0000 (0:00:12.405) 0:08:02.137 ****
2026-02-04 00:55:14.754398 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.754402 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.754412 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.754416 | orchestrator |
2026-02-04 00:55:14.754420 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-04 00:55:14.754424 | orchestrator | Wednesday 04 February 2026 00:52:31 +0000 (0:00:01.050) 0:08:03.187 ****
2026-02-04 00:55:14.754427 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.754431 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.754438 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.754442 | orchestrator |
2026-02-04 00:55:14.754446 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-04 00:55:14.754450 | orchestrator | Wednesday 04 February 2026 00:52:31 +0000 (0:00:00.347) 0:08:03.535 ****
2026-02-04 00:55:14.754454 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 00:55:14.754458 | orchestrator |
2026-02-04 00:55:14.754462 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-04 00:55:14.754466 | orchestrator | Wednesday 04 February 2026 00:52:32 +0000 (0:00:00.498) 0:08:04.033 ****
2026-02-04 00:55:14.754469 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 00:55:14.754474 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 00:55:14.754480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 00:55:14.754486 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.754491 | orchestrator |
2026-02-04 00:55:14.754503 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-04 00:55:14.754513 | orchestrator | Wednesday 04 February 2026 00:52:32 +0000 (0:00:00.653) 0:08:04.687 ****
2026-02-04 00:55:14.754519 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.754524 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.754531 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.754536 | orchestrator |
2026-02-04 00:55:14.754543 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-04 00:55:14.754548 | orchestrator | Wednesday 04 February 2026 00:52:33 +0000 (0:00:00.579) 0:08:05.266 ****
2026-02-04 00:55:14.754554 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.754559 | orchestrator |
2026-02-04 00:55:14.754565 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-04 00:55:14.754570 | orchestrator | Wednesday 04 February 2026 00:52:33 +0000 (0:00:00.223) 0:08:05.490 ****
2026-02-04 00:55:14.754576 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.754582 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.754587 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.754593 | orchestrator |
2026-02-04 00:55:14.754599 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-04 00:55:14.754604 | orchestrator | Wednesday 04 February 2026 00:52:33 +0000 (0:00:00.305) 0:08:05.795 ****
2026-02-04 00:55:14.754610 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.754615 | orchestrator |
2026-02-04 00:55:14.754620 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-04 00:55:14.754625 | orchestrator | Wednesday 04 February 2026 00:52:34 +0000 (0:00:00.232) 0:08:06.027 ****
2026-02-04 00:55:14.754631 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.754637 | orchestrator |
2026-02-04 00:55:14.754643 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-04 00:55:14.754649 | orchestrator | Wednesday 04 February 2026 00:52:34 +0000 (0:00:00.232) 0:08:06.260 ****
2026-02-04 00:55:14.754655 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.754661 | orchestrator |
2026-02-04 00:55:14.754667 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-04 00:55:14.754673 | orchestrator | Wednesday 04 February 2026 00:52:34 +0000 (0:00:00.126) 0:08:06.387 ****
2026-02-04 00:55:14.754685 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.754691 | orchestrator |
2026-02-04 00:55:14.754697 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-04 00:55:14.754709 | orchestrator | Wednesday 04 February 2026 00:52:34 +0000 (0:00:00.196) 0:08:06.584 ****
2026-02-04 00:55:14.754715 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.754721 | orchestrator |
2026-02-04 00:55:14.754728 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-04 00:55:14.754733 | orchestrator | Wednesday 04 February 2026 00:52:34 +0000 (0:00:00.203) 0:08:06.787 ****
2026-02-04 00:55:14.754737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 00:55:14.754741 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 00:55:14.754745 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 00:55:14.754749 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.754752 | orchestrator |
2026-02-04 00:55:14.754756 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-04 00:55:14.754760 | orchestrator | Wednesday 04 February 2026 00:52:35 +0000 (0:00:00.946) 0:08:07.733 ****
2026-02-04 00:55:14.754764 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.754768 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.754771 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.754775 | orchestrator |
2026-02-04 00:55:14.754779 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-04 00:55:14.754783 | orchestrator | Wednesday 04 February 2026 00:52:36 +0000 (0:00:00.284) 0:08:08.018 ****
2026-02-04 00:55:14.754787 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.754790 | orchestrator |
2026-02-04 00:55:14.754794 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-04 00:55:14.754798 | orchestrator | Wednesday 04 February 2026 00:52:36 +0000 (0:00:00.207) 0:08:08.225 ****
2026-02-04 00:55:14.754802 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.754806 | orchestrator |
2026-02-04 00:55:14.754809 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-02-04 00:55:14.754813 | orchestrator |
2026-02-04 00:55:14.754817 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-04 00:55:14.754821 | orchestrator | Wednesday 04 February 2026 00:52:36 +0000 (0:00:00.569) 0:08:08.795 ****
2026-02-04 00:55:14.754825 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:55:14.754830 | orchestrator |
2026-02-04 00:55:14.754834 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-04 00:55:14.754842 | orchestrator | Wednesday 04 February 2026 00:52:37 +0000 (0:00:01.001) 0:08:09.797 ****
2026-02-04 00:55:14.754846 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:55:14.754850 | orchestrator |
2026-02-04 00:55:14.754854 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-04 00:55:14.754857 | orchestrator | Wednesday 04 February 2026 00:52:38 +0000 (0:00:00.978) 0:08:10.775 ****
2026-02-04 00:55:14.754862 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.754866 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.754870 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.754873 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.754877 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.754881 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.754885 | orchestrator |
2026-02-04 00:55:14.754889 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-04 00:55:14.754893 | orchestrator | Wednesday 04 February 2026 00:52:39 +0000 (0:00:01.024) 0:08:11.799 ****
2026-02-04 00:55:14.754896 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.754900 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.754904 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.754912 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.754916 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.754920 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.754923 | orchestrator |
2026-02-04 00:55:14.754927 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-04 00:55:14.754931 | orchestrator | Wednesday 04 February 2026 00:52:40 +0000 (0:00:00.678) 0:08:12.478 ****
2026-02-04 00:55:14.754935 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.754939 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.754943 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.754946 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.754950 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.754954 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.754958 | orchestrator |
2026-02-04 00:55:14.754961 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-04 00:55:14.754965 | orchestrator | Wednesday 04 February 2026 00:52:41 +0000 (0:00:00.890) 0:08:13.368 ****
2026-02-04 00:55:14.754969 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.754973 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.754976 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.754980 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.754984 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.754988 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.754992 | orchestrator |
2026-02-04 00:55:14.754995 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-04 00:55:14.754999 | orchestrator | Wednesday 04 February 2026 00:52:42 +0000 (0:00:00.829) 0:08:14.198 ****
2026-02-04 00:55:14.755003 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.755007 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.755011 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.755014 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.755018 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.755022 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.755026 | orchestrator |
2026-02-04 00:55:14.755030 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-04 00:55:14.755036 | orchestrator | Wednesday 04 February 2026 00:52:43 +0000 (0:00:01.122) 0:08:15.320 ****
2026-02-04 00:55:14.755040 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.755044 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.755048 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.755052 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.755056 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.755059 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.755063 | orchestrator |
2026-02-04 00:55:14.755067 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-04 00:55:14.755071 | orchestrator | Wednesday 04 February 2026 00:52:44 +0000 (0:00:00.533) 0:08:15.853 ****
2026-02-04 00:55:14.755075 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.755078 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.755082 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.755086 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.755090 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.755093 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.755097 | orchestrator |
2026-02-04 00:55:14.755101 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-04 00:55:14.755105 | orchestrator | Wednesday 04 February 2026 00:52:44 +0000 (0:00:00.742) 0:08:16.596 ****
2026-02-04 00:55:14.755109 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.755112 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.755116 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.755131 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.755135 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.755139 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.755142 | orchestrator |
2026-02-04 00:55:14.755146 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-04 00:55:14.755154 | orchestrator | Wednesday 04 February 2026 00:52:45 +0000 (0:00:01.061) 0:08:17.657 ****
2026-02-04 00:55:14.755158 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.755162 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.755166 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.755169 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.755173 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.755177 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.755181 | orchestrator |
2026-02-04 00:55:14.755185 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-04 00:55:14.755189 | orchestrator | Wednesday 04 February 2026 00:52:46 +0000 (0:00:01.153) 0:08:18.811 ****
2026-02-04 00:55:14.755192 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.755196 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.755200 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.755204 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.755208 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.755211 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.755215 | orchestrator |
2026-02-04 00:55:14.755219 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-04 00:55:14.755226 | orchestrator | Wednesday 04 February 2026 00:52:47 +0000 (0:00:00.472) 0:08:19.283 ****
2026-02-04 00:55:14.755229 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.755233 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.755237 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.755242 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.755249 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.755255 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.755261 | orchestrator |
2026-02-04 00:55:14.755270 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-04 00:55:14.755279 | orchestrator | Wednesday 04 February 2026 00:52:48 +0000 (0:00:00.771) 0:08:20.055 ****
2026-02-04 00:55:14.755285 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.755291 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.755297 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.755303 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.755309 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.755315 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.755321 | orchestrator |
2026-02-04 00:55:14.755328 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-04 00:55:14.755334 | orchestrator | Wednesday 04 February 2026 00:52:48 +0000 (0:00:00.526) 0:08:20.582 ****
2026-02-04 00:55:14.755340 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.755346 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.755352 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.755358 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.755364 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.755370 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.755376 | orchestrator |
2026-02-04 00:55:14.755383 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-04 00:55:14.755389 | orchestrator | Wednesday 04 February 2026 00:52:49 +0000 (0:00:00.650) 0:08:21.233 ****
2026-02-04 00:55:14.755395 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.755402 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.755408 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.755414 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.755421 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.755428 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.755432 | orchestrator |
2026-02-04 00:55:14.755435 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-04 00:55:14.755439 | orchestrator | Wednesday 04 February 2026 00:52:49 +0000 (0:00:00.494) 0:08:21.727 ****
2026-02-04 00:55:14.755443 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.755447 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.755456 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.755460 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.755464 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.755468 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.755471 | orchestrator |
2026-02-04 00:55:14.755475 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-04 00:55:14.755479 | orchestrator | Wednesday 04 February 2026 00:52:50 +0000 (0:00:00.636) 0:08:22.364 ****
2026-02-04 00:55:14.755483 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.755487 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.755491 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.755494 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:55:14.755498 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:55:14.755502 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:55:14.755506 | orchestrator |
2026-02-04 00:55:14.755510 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-04 00:55:14.755518 | orchestrator | Wednesday 04 February 2026 00:52:51 +0000 (0:00:00.532) 0:08:22.896 ****
2026-02-04 00:55:14.755522 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.755526 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.755530 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.755533 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.755537 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.755541 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.755545 | orchestrator |
2026-02-04 00:55:14.755549 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-04 00:55:14.755553 | orchestrator | Wednesday 04 February 2026 00:52:51 +0000 (0:00:00.699) 0:08:23.596 ****
2026-02-04 00:55:14.755557 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.755560 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.755564 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.755568 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.755572 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.755575 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.755579 | orchestrator |
2026-02-04 00:55:14.755583 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-04 00:55:14.755587 | orchestrator | Wednesday 04 February 2026 00:52:52 +0000 (0:00:00.601) 0:08:24.197 ****
2026-02-04 00:55:14.755591 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.755594 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.755598 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.755602 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.755606 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.755610 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.755613 | orchestrator |
2026-02-04 00:55:14.755617 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-02-04 00:55:14.755621 | orchestrator | Wednesday 04 February 2026 00:52:53 +0000 (0:00:01.139) 0:08:25.336 ****
2026-02-04 00:55:14.755625 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-04 00:55:14.755629 | orchestrator |
2026-02-04 00:55:14.755633 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-02-04 00:55:14.755636 | orchestrator | Wednesday 04 February 2026 00:52:57 +0000 (0:00:04.034) 0:08:29.370 ****
2026-02-04 00:55:14.755640 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-04 00:55:14.755644 | orchestrator |
2026-02-04 00:55:14.755648 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-02-04 00:55:14.755652 | orchestrator | Wednesday 04 February 2026 00:52:59 +0000 (0:00:02.030) 0:08:31.401 ****
2026-02-04 00:55:14.755656 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:55:14.755659 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:55:14.755663 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:55:14.755667 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.755671 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:55:14.755675 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:55:14.755682 | orchestrator |
2026-02-04 00:55:14.755689 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-02-04 00:55:14.755693 | orchestrator | Wednesday 04 February 2026 00:53:01 +0000 (0:00:01.612) 0:08:33.014 ****
2026-02-04 00:55:14.755697 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:55:14.755701 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:55:14.755705 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:55:14.755708 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:55:14.755712 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:55:14.755716 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:55:14.755720 | orchestrator |
2026-02-04 00:55:14.755723 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-02-04 00:55:14.755727 | orchestrator | Wednesday 04 February 2026 00:53:02 +0000 (0:00:01.056) 0:08:34.071 ****
2026-02-04 00:55:14.755731 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:55:14.755737 | orchestrator |
2026-02-04 00:55:14.755741 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-02-04 00:55:14.755745 | orchestrator | Wednesday 04 February 2026 00:53:03 +0000 (0:00:01.234) 0:08:35.306 ****
2026-02-04 00:55:14.755749 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:55:14.755752 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:55:14.755756 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:55:14.755760 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:55:14.755764 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:55:14.755768 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:55:14.755771 | orchestrator |
2026-02-04 00:55:14.755775 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-02-04 00:55:14.755779 | orchestrator | Wednesday 04 February 2026 00:53:05 +0000 (0:00:01.977) 0:08:37.283 ****
2026-02-04 00:55:14.755783 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:55:14.755787 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:55:14.755790 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:55:14.755794 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:55:14.755798 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:55:14.755802 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:55:14.755805 | orchestrator |
2026-02-04 00:55:14.755809 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-02-04 00:55:14.755813 | orchestrator | Wednesday 04 February 2026 00:53:09 +0000 (0:00:03.981) 0:08:41.265 ****
2026-02-04 00:55:14.755817 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:55:14.755821 | orchestrator |
2026-02-04 00:55:14.755825 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-02-04 00:55:14.755829 | orchestrator | Wednesday 04 February 2026 00:53:10 +0000 (0:00:01.285) 0:08:42.550 ****
2026-02-04 00:55:14.755832 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.755836 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.755840 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.755844 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.755848 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.755851 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.755855 | orchestrator |
2026-02-04 00:55:14.755859 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-02-04 00:55:14.755866 | orchestrator | Wednesday 04 February 2026 00:53:11 +0000 (0:00:00.873) 0:08:43.424 ****
2026-02-04 00:55:14.755869 | orchestrator | changed: [testbed-node-4]
2026-02-04 00:55:14.755873 | orchestrator | changed: [testbed-node-3]
2026-02-04 00:55:14.755877 | orchestrator | changed: [testbed-node-5]
2026-02-04 00:55:14.755881 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:55:14.755885 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:55:14.755889 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:55:14.755895 | orchestrator |
2026-02-04 00:55:14.755899 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-02-04 00:55:14.755903 | orchestrator | Wednesday 04 February 2026 00:53:14 +0000 (0:00:02.504) 0:08:45.928 ****
2026-02-04 00:55:14.755907 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.755911 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.755915 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.755918 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:55:14.755922 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:55:14.755926 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:55:14.755930 | orchestrator |
2026-02-04 00:55:14.755933 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-02-04 00:55:14.755937 | orchestrator |
2026-02-04 00:55:14.755941 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-04 00:55:14.755945 | orchestrator | Wednesday 04 February 2026 00:53:15 +0000 (0:00:01.003) 0:08:46.932 ****
2026-02-04 00:55:14.755949 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 00:55:14.755953 | orchestrator |
2026-02-04 00:55:14.755957 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-04 00:55:14.755960 | orchestrator | Wednesday 04 February 2026 00:53:15 +0000 (0:00:00.493) 0:08:47.426 ****
2026-02-04 00:55:14.755964 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 00:55:14.755968 | orchestrator |
2026-02-04 00:55:14.755972 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-04 00:55:14.755976 | orchestrator | Wednesday 04 February 2026 00:53:16 +0000 (0:00:00.596) 0:08:48.022 ****
2026-02-04 00:55:14.755979 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.755983 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.755987 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.755991 | orchestrator |
2026-02-04 00:55:14.755995 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-04 00:55:14.755999 | orchestrator | Wednesday 04 February 2026 00:53:16 +0000 (0:00:00.281) 0:08:48.303 ****
2026-02-04 00:55:14.756002 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.756006 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.756010 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.756017 | orchestrator |
2026-02-04 00:55:14.756020 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-04 00:55:14.756024 | orchestrator | Wednesday 04 February 2026 00:53:17 +0000 (0:00:00.694) 0:08:48.998 ****
2026-02-04 00:55:14.756028 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.756032 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.756036 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.756039 | orchestrator |
2026-02-04 00:55:14.756043 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-04 00:55:14.756047 | orchestrator | Wednesday 04 February 2026 00:53:18 +0000 (0:00:00.842) 0:08:49.841 ****
2026-02-04 00:55:14.756051 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.756055 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.756059 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.756062 | orchestrator |
2026-02-04 00:55:14.756066 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-04 00:55:14.756070 | orchestrator | Wednesday 04 February 2026 00:53:18 +0000 (0:00:00.614) 0:08:50.456 ****
2026-02-04 00:55:14.756074 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.756078 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.756082 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.756085 | orchestrator |
2026-02-04 00:55:14.756089 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-04 00:55:14.756093 | orchestrator | Wednesday 04 February 2026 00:53:18 +0000 (0:00:00.274) 0:08:50.731 ****
2026-02-04 00:55:14.756097 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.756106 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.756110 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.756113 | orchestrator |
2026-02-04 00:55:14.756117 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-04 00:55:14.756134 | orchestrator | Wednesday 04 February 2026 00:53:19 +0000 (0:00:00.254) 0:08:50.985 ****
2026-02-04 00:55:14.756138 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.756141 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.756145 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.756149 | orchestrator |
2026-02-04 00:55:14.756153 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-04 00:55:14.756157 | orchestrator | Wednesday 04 February 2026 00:53:19 +0000 (0:00:00.441) 0:08:51.427 ****
2026-02-04 00:55:14.756160 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.756164 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.756168 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.756172 | orchestrator |
2026-02-04 00:55:14.756176 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-04 00:55:14.756180 | orchestrator | Wednesday 04 February 2026 00:53:20 +0000 (0:00:00.633) 0:08:52.060 ****
2026-02-04 00:55:14.756183 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.756187 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.756191 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.756195 | orchestrator |
2026-02-04 00:55:14.756199 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-04 00:55:14.756202 | orchestrator | Wednesday 04 February 2026 00:53:20 +0000 (0:00:00.612) 0:08:52.672 ****
2026-02-04 00:55:14.756206 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.756210 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.756214 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.756218 | orchestrator |
2026-02-04 00:55:14.756221 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-04 00:55:14.756228 | orchestrator | Wednesday 04 February 2026 00:53:21 +0000 (0:00:00.276) 0:08:52.949 ****
2026-02-04 00:55:14.756232 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.756236 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.756240 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.756244 | orchestrator |
2026-02-04 00:55:14.756248 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-04 00:55:14.756252 | orchestrator | Wednesday 04 February 2026 00:53:21 +0000 (0:00:00.440) 0:08:53.390 ****
2026-02-04 00:55:14.756255 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.756259 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.756263 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.756267 | orchestrator |
2026-02-04 00:55:14.756271 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-04 00:55:14.756274 | orchestrator | Wednesday 04 February 2026 00:53:21 +0000 (0:00:00.318) 0:08:53.708 ****
2026-02-04 00:55:14.756278 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.756282 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.756286 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.756290 | orchestrator |
2026-02-04 00:55:14.756294 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-04 00:55:14.756297 | orchestrator | Wednesday 04 February 2026 00:53:22 +0000 (0:00:00.292) 0:08:54.001 ****
2026-02-04 00:55:14.756301 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.756305 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.756309 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.756312 | orchestrator |
2026-02-04 00:55:14.756316 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-04 00:55:14.756320 | orchestrator | Wednesday 04 February 2026 00:53:22 +0000 (0:00:00.266) 0:08:54.268 ****
2026-02-04 00:55:14.756324 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.756328 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.756331 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.756340 | orchestrator |
2026-02-04 00:55:14.756344 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-04 00:55:14.756347 | orchestrator | Wednesday 04 February 2026 00:53:22 +0000 (0:00:00.421) 0:08:54.690 ****
2026-02-04 00:55:14.756351 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.756355 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.756359 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.756363 | orchestrator |
2026-02-04 00:55:14.756366 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-04 00:55:14.756370 | orchestrator | Wednesday 04 February 2026 00:53:23 +0000 (0:00:00.291) 0:08:54.981 ****
2026-02-04 00:55:14.756374 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:55:14.756378 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.756381 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.756385 | orchestrator |
2026-02-04 00:55:14.756392 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-04 00:55:14.756396 | orchestrator | Wednesday 04 February 2026 00:53:23 +0000 (0:00:00.274) 0:08:55.256 ****
2026-02-04 00:55:14.756400 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.756403 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.756407 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.756411 | orchestrator |
2026-02-04 00:55:14.756415 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-04 00:55:14.756419 | orchestrator | Wednesday 04 February 2026 00:53:23 +0000 (0:00:00.266) 0:08:55.522 ****
2026-02-04 00:55:14.756423 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:55:14.756426 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:55:14.756430 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:55:14.756434 | orchestrator |
2026-02-04 00:55:14.756438 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-02-04 00:55:14.756441 | orchestrator | Wednesday 04 February 2026 00:53:24 +0000 (0:00:00.739) 0:08:56.262 ****
2026-02-04 00:55:14.756445 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:55:14.756449 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:55:14.756453 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-02-04 00:55:14.756457 | orchestrator |
2026-02-04 00:55:14.756461 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-02-04 00:55:14.756465 | orchestrator | Wednesday 04 February 2026 00:53:24 +0000 (0:00:00.331) 0:08:56.593 ****
2026-02-04 00:55:14.756468 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-04 00:55:14.756480 | orchestrator |
2026-02-04 00:55:14.756484 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-02-04 00:55:14.756488 | orchestrator | Wednesday 04 February 2026 00:53:27 +0000 (0:00:02.450) 0:08:59.043 ****
2026-02-04 00:55:14.756494 | orchestrator | skipping: [testbed-node-3] =>
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-02-04 00:55:14.756500 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.756504 | orchestrator | 2026-02-04 00:55:14.756508 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-02-04 00:55:14.756512 | orchestrator | Wednesday 04 February 2026 00:53:27 +0000 (0:00:00.183) 0:08:59.227 **** 2026-02-04 00:55:14.756517 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-04 00:55:14.756526 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-04 00:55:14.756536 | orchestrator | 2026-02-04 00:55:14.756543 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-02-04 00:55:14.756547 | orchestrator | Wednesday 04 February 2026 00:53:36 +0000 (0:00:09.099) 0:09:08.326 **** 2026-02-04 00:55:14.756550 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-04 00:55:14.756554 | orchestrator | 2026-02-04 00:55:14.756558 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-04 00:55:14.756562 | orchestrator | Wednesday 04 February 2026 00:53:40 +0000 (0:00:03.630) 0:09:11.956 **** 2026-02-04 00:55:14.756566 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-02-04 00:55:14.756570 | orchestrator | 2026-02-04 00:55:14.756573 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-04 00:55:14.756577 | orchestrator | Wednesday 04 February 2026 00:53:40 +0000 (0:00:00.453) 0:09:12.410 **** 2026-02-04 00:55:14.756581 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-04 00:55:14.756585 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-04 00:55:14.756589 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-04 00:55:14.756592 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-02-04 00:55:14.756596 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-02-04 00:55:14.756600 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-02-04 00:55:14.756604 | orchestrator | 2026-02-04 00:55:14.756608 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-04 00:55:14.756612 | orchestrator | Wednesday 04 February 2026 00:53:41 +0000 (0:00:00.984) 0:09:13.395 **** 2026-02-04 00:55:14.756615 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:55:14.756619 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-04 00:55:14.756623 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-04 00:55:14.756627 | orchestrator | 2026-02-04 00:55:14.756631 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-04 00:55:14.756635 | orchestrator | Wednesday 04 February 2026 00:53:44 +0000 (0:00:02.860) 0:09:16.255 **** 2026-02-04 00:55:14.756639 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-04 00:55:14.756643 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-02-04 00:55:14.756646 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:55:14.756650 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-04 00:55:14.756657 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-04 00:55:14.756661 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:55:14.756664 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-04 00:55:14.756668 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-04 00:55:14.756672 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:55:14.756676 | orchestrator | 2026-02-04 00:55:14.756680 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-04 00:55:14.756683 | orchestrator | Wednesday 04 February 2026 00:53:45 +0000 (0:00:01.278) 0:09:17.534 **** 2026-02-04 00:55:14.756687 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:55:14.756691 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:55:14.756695 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:55:14.756699 | orchestrator | 2026-02-04 00:55:14.756703 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-04 00:55:14.756706 | orchestrator | Wednesday 04 February 2026 00:53:47 +0000 (0:00:02.288) 0:09:19.823 **** 2026-02-04 00:55:14.756710 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.756714 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.756718 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.756721 | orchestrator | 2026-02-04 00:55:14.756725 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-04 00:55:14.756732 | orchestrator | Wednesday 04 February 2026 00:53:48 +0000 (0:00:00.314) 0:09:20.137 **** 2026-02-04 00:55:14.756736 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-02-04 00:55:14.756740 | orchestrator | 2026-02-04 00:55:14.756744 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-04 00:55:14.756748 | orchestrator | Wednesday 04 February 2026 00:53:48 +0000 (0:00:00.672) 0:09:20.809 **** 2026-02-04 00:55:14.756751 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:55:14.756755 | orchestrator | 2026-02-04 00:55:14.756759 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-04 00:55:14.756763 | orchestrator | Wednesday 04 February 2026 00:53:49 +0000 (0:00:00.484) 0:09:21.294 **** 2026-02-04 00:55:14.756767 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:55:14.756771 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:55:14.756774 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:55:14.756778 | orchestrator | 2026-02-04 00:55:14.756782 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-04 00:55:14.756786 | orchestrator | Wednesday 04 February 2026 00:53:50 +0000 (0:00:01.112) 0:09:22.406 **** 2026-02-04 00:55:14.756789 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:55:14.756793 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:55:14.756797 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:55:14.756801 | orchestrator | 2026-02-04 00:55:14.756805 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-04 00:55:14.756809 | orchestrator | Wednesday 04 February 2026 00:53:52 +0000 (0:00:01.485) 0:09:23.892 **** 2026-02-04 00:55:14.756812 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:55:14.756816 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:55:14.756820 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:55:14.756824 | orchestrator | 2026-02-04 
00:55:14.756828 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-04 00:55:14.756834 | orchestrator | Wednesday 04 February 2026 00:53:53 +0000 (0:00:01.778) 0:09:25.670 **** 2026-02-04 00:55:14.756838 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:55:14.756843 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:55:14.756850 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:55:14.756857 | orchestrator | 2026-02-04 00:55:14.756863 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-04 00:55:14.756868 | orchestrator | Wednesday 04 February 2026 00:53:55 +0000 (0:00:01.913) 0:09:27.583 **** 2026-02-04 00:55:14.756874 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.756880 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.756886 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.756892 | orchestrator | 2026-02-04 00:55:14.756898 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-04 00:55:14.756904 | orchestrator | Wednesday 04 February 2026 00:53:57 +0000 (0:00:01.252) 0:09:28.836 **** 2026-02-04 00:55:14.756911 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:55:14.756918 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:55:14.756925 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:55:14.756932 | orchestrator | 2026-02-04 00:55:14.756939 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-04 00:55:14.756946 | orchestrator | Wednesday 04 February 2026 00:53:57 +0000 (0:00:00.633) 0:09:29.470 **** 2026-02-04 00:55:14.756953 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:55:14.756960 | orchestrator | 2026-02-04 00:55:14.756967 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-02-04 00:55:14.756974 | orchestrator | Wednesday 04 February 2026 00:53:58 +0000 (0:00:00.695) 0:09:30.166 **** 2026-02-04 00:55:14.756981 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.756995 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.757003 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.757010 | orchestrator | 2026-02-04 00:55:14.757017 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-02-04 00:55:14.757024 | orchestrator | Wednesday 04 February 2026 00:53:58 +0000 (0:00:00.323) 0:09:30.490 **** 2026-02-04 00:55:14.757031 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:55:14.757038 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:55:14.757045 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:55:14.757052 | orchestrator | 2026-02-04 00:55:14.757060 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-04 00:55:14.757067 | orchestrator | Wednesday 04 February 2026 00:53:59 +0000 (0:00:01.125) 0:09:31.615 **** 2026-02-04 00:55:14.757074 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 00:55:14.757081 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 00:55:14.757091 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 00:55:14.757099 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.757106 | orchestrator | 2026-02-04 00:55:14.757113 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-04 00:55:14.757135 | orchestrator | Wednesday 04 February 2026 00:54:00 +0000 (0:00:00.856) 0:09:32.472 **** 2026-02-04 00:55:14.757142 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.757149 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.757156 | orchestrator | ok: [testbed-node-5] 2026-02-04 
00:55:14.757163 | orchestrator | 2026-02-04 00:55:14.757170 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-04 00:55:14.757176 | orchestrator | 2026-02-04 00:55:14.757183 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-04 00:55:14.757190 | orchestrator | Wednesday 04 February 2026 00:54:01 +0000 (0:00:00.779) 0:09:33.252 **** 2026-02-04 00:55:14.757197 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:55:14.757204 | orchestrator | 2026-02-04 00:55:14.757211 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-04 00:55:14.757218 | orchestrator | Wednesday 04 February 2026 00:54:01 +0000 (0:00:00.479) 0:09:33.731 **** 2026-02-04 00:55:14.757225 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:55:14.757232 | orchestrator | 2026-02-04 00:55:14.757239 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-04 00:55:14.757246 | orchestrator | Wednesday 04 February 2026 00:54:02 +0000 (0:00:00.703) 0:09:34.435 **** 2026-02-04 00:55:14.757253 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.757259 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.757266 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.757272 | orchestrator | 2026-02-04 00:55:14.757277 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-04 00:55:14.757283 | orchestrator | Wednesday 04 February 2026 00:54:02 +0000 (0:00:00.307) 0:09:34.743 **** 2026-02-04 00:55:14.757289 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.757295 | orchestrator | ok: [testbed-node-4] 2026-02-04 
00:55:14.757301 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.757308 | orchestrator | 2026-02-04 00:55:14.757314 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-04 00:55:14.757320 | orchestrator | Wednesday 04 February 2026 00:54:03 +0000 (0:00:00.619) 0:09:35.362 **** 2026-02-04 00:55:14.757326 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.757332 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.757338 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.757344 | orchestrator | 2026-02-04 00:55:14.757350 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-04 00:55:14.757356 | orchestrator | Wednesday 04 February 2026 00:54:04 +0000 (0:00:00.721) 0:09:36.083 **** 2026-02-04 00:55:14.757369 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.757375 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.757381 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.757387 | orchestrator | 2026-02-04 00:55:14.757393 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-04 00:55:14.757399 | orchestrator | Wednesday 04 February 2026 00:54:05 +0000 (0:00:00.964) 0:09:37.047 **** 2026-02-04 00:55:14.757403 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.757410 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.757415 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.757418 | orchestrator | 2026-02-04 00:55:14.757422 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-04 00:55:14.757426 | orchestrator | Wednesday 04 February 2026 00:54:05 +0000 (0:00:00.329) 0:09:37.376 **** 2026-02-04 00:55:14.757430 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.757433 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.757437 | orchestrator | skipping: 
[testbed-node-5] 2026-02-04 00:55:14.757441 | orchestrator | 2026-02-04 00:55:14.757445 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-04 00:55:14.757449 | orchestrator | Wednesday 04 February 2026 00:54:05 +0000 (0:00:00.291) 0:09:37.668 **** 2026-02-04 00:55:14.757452 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.757456 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.757460 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.757464 | orchestrator | 2026-02-04 00:55:14.757468 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-04 00:55:14.757472 | orchestrator | Wednesday 04 February 2026 00:54:06 +0000 (0:00:00.279) 0:09:37.947 **** 2026-02-04 00:55:14.757475 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.757479 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.757483 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.757487 | orchestrator | 2026-02-04 00:55:14.757491 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-04 00:55:14.757494 | orchestrator | Wednesday 04 February 2026 00:54:07 +0000 (0:00:01.018) 0:09:38.966 **** 2026-02-04 00:55:14.757498 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.757502 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.757506 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.757510 | orchestrator | 2026-02-04 00:55:14.757513 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-04 00:55:14.757517 | orchestrator | Wednesday 04 February 2026 00:54:07 +0000 (0:00:00.671) 0:09:39.638 **** 2026-02-04 00:55:14.757521 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.757525 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.757529 | orchestrator | skipping: [testbed-node-5] 2026-02-04 
00:55:14.757532 | orchestrator | 2026-02-04 00:55:14.757536 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-04 00:55:14.757540 | orchestrator | Wednesday 04 February 2026 00:54:08 +0000 (0:00:00.334) 0:09:39.973 **** 2026-02-04 00:55:14.757544 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.757548 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.757552 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.757555 | orchestrator | 2026-02-04 00:55:14.757559 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-04 00:55:14.757563 | orchestrator | Wednesday 04 February 2026 00:54:08 +0000 (0:00:00.279) 0:09:40.253 **** 2026-02-04 00:55:14.757570 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.757574 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.757578 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.757582 | orchestrator | 2026-02-04 00:55:14.757585 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-04 00:55:14.757589 | orchestrator | Wednesday 04 February 2026 00:54:09 +0000 (0:00:00.590) 0:09:40.843 **** 2026-02-04 00:55:14.757593 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.757601 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.757605 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.757609 | orchestrator | 2026-02-04 00:55:14.757612 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-04 00:55:14.757616 | orchestrator | Wednesday 04 February 2026 00:54:09 +0000 (0:00:00.336) 0:09:41.180 **** 2026-02-04 00:55:14.757620 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.757624 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.757628 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.757631 | orchestrator | 2026-02-04 
00:55:14.757635 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-04 00:55:14.757639 | orchestrator | Wednesday 04 February 2026 00:54:09 +0000 (0:00:00.317) 0:09:41.497 **** 2026-02-04 00:55:14.757643 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.757647 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.757650 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.757654 | orchestrator | 2026-02-04 00:55:14.757658 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-04 00:55:14.757662 | orchestrator | Wednesday 04 February 2026 00:54:09 +0000 (0:00:00.299) 0:09:41.797 **** 2026-02-04 00:55:14.757666 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.757670 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.757673 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.757679 | orchestrator | 2026-02-04 00:55:14.757685 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-04 00:55:14.757691 | orchestrator | Wednesday 04 February 2026 00:54:10 +0000 (0:00:00.517) 0:09:42.314 **** 2026-02-04 00:55:14.757697 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.757703 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.757710 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.757716 | orchestrator | 2026-02-04 00:55:14.757723 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-04 00:55:14.757729 | orchestrator | Wednesday 04 February 2026 00:54:10 +0000 (0:00:00.358) 0:09:42.673 **** 2026-02-04 00:55:14.757735 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.757741 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.757747 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.757754 | orchestrator | 2026-02-04 00:55:14.757759 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-04 00:55:14.757762 | orchestrator | Wednesday 04 February 2026 00:54:11 +0000 (0:00:00.336) 0:09:43.009 **** 2026-02-04 00:55:14.757766 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.757770 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.757774 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.757778 | orchestrator | 2026-02-04 00:55:14.757781 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-04 00:55:14.757785 | orchestrator | Wednesday 04 February 2026 00:54:11 +0000 (0:00:00.726) 0:09:43.736 **** 2026-02-04 00:55:14.757793 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:55:14.757797 | orchestrator | 2026-02-04 00:55:14.757800 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-04 00:55:14.757804 | orchestrator | Wednesday 04 February 2026 00:54:12 +0000 (0:00:00.499) 0:09:44.235 **** 2026-02-04 00:55:14.757808 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:55:14.757812 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-04 00:55:14.757816 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-04 00:55:14.757820 | orchestrator | 2026-02-04 00:55:14.757823 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-04 00:55:14.757827 | orchestrator | Wednesday 04 February 2026 00:54:14 +0000 (0:00:02.307) 0:09:46.543 **** 2026-02-04 00:55:14.757831 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-04 00:55:14.757835 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-04 00:55:14.757843 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:55:14.757847 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-02-04 00:55:14.757851 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-04 00:55:14.757855 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:55:14.757858 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-04 00:55:14.757862 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-04 00:55:14.757866 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:55:14.757870 | orchestrator | 2026-02-04 00:55:14.757874 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-02-04 00:55:14.757878 | orchestrator | Wednesday 04 February 2026 00:54:16 +0000 (0:00:01.503) 0:09:48.046 **** 2026-02-04 00:55:14.757881 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.757885 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.757889 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.757893 | orchestrator | 2026-02-04 00:55:14.757897 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-04 00:55:14.757900 | orchestrator | Wednesday 04 February 2026 00:54:16 +0000 (0:00:00.314) 0:09:48.361 **** 2026-02-04 00:55:14.757904 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:55:14.757908 | orchestrator | 2026-02-04 00:55:14.757912 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-04 00:55:14.757916 | orchestrator | Wednesday 04 February 2026 00:54:17 +0000 (0:00:00.496) 0:09:48.858 **** 2026-02-04 00:55:14.757920 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-04 00:55:14.757927 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-04 00:55:14.757931 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-04 00:55:14.757935 | orchestrator | 2026-02-04 00:55:14.757939 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-04 00:55:14.757943 | orchestrator | Wednesday 04 February 2026 00:54:18 +0000 (0:00:01.227) 0:09:50.085 **** 2026-02-04 00:55:14.757947 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:55:14.757951 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-04 00:55:14.757954 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:55:14.757958 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-04 00:55:14.757962 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:55:14.757966 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-04 00:55:14.757970 | orchestrator | 2026-02-04 00:55:14.757976 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-04 00:55:14.757982 | orchestrator | Wednesday 04 February 2026 00:54:22 +0000 (0:00:04.539) 0:09:54.625 **** 2026-02-04 00:55:14.757988 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:55:14.757995 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-04 00:55:14.758001 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:55:14.758008 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-04 00:55:14.758051 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:55:14.758065 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-04 00:55:14.758088 | orchestrator | 2026-02-04 00:55:14.758095 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-04 00:55:14.758102 | orchestrator | Wednesday 04 February 2026 00:54:25 +0000 (0:00:02.407) 0:09:57.032 **** 2026-02-04 00:55:14.758109 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-04 00:55:14.758116 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:55:14.758164 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-04 00:55:14.758172 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:55:14.758179 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-04 00:55:14.758186 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:55:14.758193 | orchestrator | 2026-02-04 00:55:14.758205 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-04 00:55:14.758212 | orchestrator | Wednesday 04 February 2026 00:54:26 +0000 (0:00:01.243) 0:09:58.276 **** 2026-02-04 00:55:14.758219 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-04 00:55:14.758226 | orchestrator | 2026-02-04 00:55:14.758233 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-04 00:55:14.758240 | orchestrator | Wednesday 04 February 2026 00:54:26 +0000 (0:00:00.196) 0:09:58.473 **** 2026-02-04 00:55:14.758247 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-02-04 00:55:14.758255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 00:55:14.758262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 00:55:14.758269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 00:55:14.758276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 00:55:14.758283 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.758290 | orchestrator | 2026-02-04 00:55:14.758297 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-04 00:55:14.758304 | orchestrator | Wednesday 04 February 2026 00:54:27 +0000 (0:00:00.863) 0:09:59.336 **** 2026-02-04 00:55:14.758311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 00:55:14.758318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 00:55:14.758325 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 00:55:14.758332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 00:55:14.758342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 00:55:14.758349 | orchestrator | skipping: [testbed-node-3] 2026-02-04 
00:55:14.758356 | orchestrator | 2026-02-04 00:55:14.758363 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-04 00:55:14.758370 | orchestrator | Wednesday 04 February 2026 00:54:28 +0000 (0:00:00.528) 0:09:59.864 **** 2026-02-04 00:55:14.758377 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-04 00:55:14.758384 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-04 00:55:14.758396 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-04 00:55:14.758403 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-04 00:55:14.758410 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-04 00:55:14.758417 | orchestrator | 2026-02-04 00:55:14.758424 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-04 00:55:14.758431 | orchestrator | Wednesday 04 February 2026 00:54:59 +0000 (0:00:31.632) 0:10:31.497 **** 2026-02-04 00:55:14.758438 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.758445 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.758452 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.758459 | orchestrator | 2026-02-04 00:55:14.758465 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-04 00:55:14.758477 | orchestrator | 
Wednesday 04 February 2026 00:54:59 +0000 (0:00:00.308) 0:10:31.805 **** 2026-02-04 00:55:14.758483 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.758491 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.758497 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.758504 | orchestrator | 2026-02-04 00:55:14.758511 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-04 00:55:14.758518 | orchestrator | Wednesday 04 February 2026 00:55:00 +0000 (0:00:00.334) 0:10:32.140 **** 2026-02-04 00:55:14.758525 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:55:14.758532 | orchestrator | 2026-02-04 00:55:14.758538 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-02-04 00:55:14.758545 | orchestrator | Wednesday 04 February 2026 00:55:01 +0000 (0:00:00.849) 0:10:32.990 **** 2026-02-04 00:55:14.758555 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:55:14.758562 | orchestrator | 2026-02-04 00:55:14.758569 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-04 00:55:14.758575 | orchestrator | Wednesday 04 February 2026 00:55:01 +0000 (0:00:00.576) 0:10:33.567 **** 2026-02-04 00:55:14.758582 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:55:14.758589 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:55:14.758596 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:55:14.758603 | orchestrator | 2026-02-04 00:55:14.758610 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-04 00:55:14.758617 | orchestrator | Wednesday 04 February 2026 00:55:03 +0000 (0:00:01.329) 0:10:34.896 **** 2026-02-04 00:55:14.758624 | orchestrator | changed: 
[testbed-node-3] 2026-02-04 00:55:14.758630 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:55:14.758637 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:55:14.758644 | orchestrator | 2026-02-04 00:55:14.758651 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-04 00:55:14.758658 | orchestrator | Wednesday 04 February 2026 00:55:04 +0000 (0:00:01.537) 0:10:36.433 **** 2026-02-04 00:55:14.758665 | orchestrator | changed: [testbed-node-3] 2026-02-04 00:55:14.758671 | orchestrator | changed: [testbed-node-4] 2026-02-04 00:55:14.758679 | orchestrator | changed: [testbed-node-5] 2026-02-04 00:55:14.758685 | orchestrator | 2026-02-04 00:55:14.758692 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-04 00:55:14.758699 | orchestrator | Wednesday 04 February 2026 00:55:06 +0000 (0:00:01.957) 0:10:38.391 **** 2026-02-04 00:55:14.758706 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-04 00:55:14.758717 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-04 00:55:14.758725 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-04 00:55:14.758732 | orchestrator | 2026-02-04 00:55:14.758738 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-04 00:55:14.758745 | orchestrator | Wednesday 04 February 2026 00:55:09 +0000 (0:00:02.659) 0:10:41.051 **** 2026-02-04 00:55:14.758752 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.758759 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.758766 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.758773 | orchestrator 
| 2026-02-04 00:55:14.758780 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-04 00:55:14.758790 | orchestrator | Wednesday 04 February 2026 00:55:09 +0000 (0:00:00.326) 0:10:41.377 **** 2026-02-04 00:55:14.758796 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:55:14.758803 | orchestrator | 2026-02-04 00:55:14.758809 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-02-04 00:55:14.758814 | orchestrator | Wednesday 04 February 2026 00:55:10 +0000 (0:00:00.511) 0:10:41.888 **** 2026-02-04 00:55:14.758820 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.758826 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.758831 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.758837 | orchestrator | 2026-02-04 00:55:14.758843 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-02-04 00:55:14.758849 | orchestrator | Wednesday 04 February 2026 00:55:10 +0000 (0:00:00.568) 0:10:42.457 **** 2026-02-04 00:55:14.758856 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:55:14.758862 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:55:14.758869 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:55:14.758876 | orchestrator | 2026-02-04 00:55:14.758883 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-02-04 00:55:14.758890 | orchestrator | Wednesday 04 February 2026 00:55:10 +0000 (0:00:00.341) 0:10:42.798 **** 2026-02-04 00:55:14.758896 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 00:55:14.758903 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 00:55:14.758909 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 00:55:14.758916 | orchestrator 
| skipping: [testbed-node-3] 2026-02-04 00:55:14.758922 | orchestrator | 2026-02-04 00:55:14.758929 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-02-04 00:55:14.758935 | orchestrator | Wednesday 04 February 2026 00:55:11 +0000 (0:00:00.616) 0:10:43.415 **** 2026-02-04 00:55:14.758941 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:55:14.758947 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:55:14.758953 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:55:14.758960 | orchestrator | 2026-02-04 00:55:14.758964 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:55:14.758968 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-02-04 00:55:14.758972 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-02-04 00:55:14.758976 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-02-04 00:55:14.758980 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-02-04 00:55:14.758984 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-02-04 00:55:14.758998 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-02-04 00:55:14.759002 | orchestrator | 2026-02-04 00:55:14.759006 | orchestrator | 2026-02-04 00:55:14.759010 | orchestrator | 2026-02-04 00:55:14.759014 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:55:14.759018 | orchestrator | Wednesday 04 February 2026 00:55:11 +0000 (0:00:00.231) 0:10:43.647 **** 2026-02-04 00:55:14.759022 | orchestrator | =============================================================================== 
2026-02-04 00:55:14.759025 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 50.23s 2026-02-04 00:55:14.759029 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 38.00s 2026-02-04 00:55:14.759033 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.23s 2026-02-04 00:55:14.759037 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.63s 2026-02-04 00:55:14.759041 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.06s 2026-02-04 00:55:14.759045 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 16.44s 2026-02-04 00:55:14.759048 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.41s 2026-02-04 00:55:14.759052 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.81s 2026-02-04 00:55:14.759057 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.66s 2026-02-04 00:55:14.759063 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 9.10s 2026-02-04 00:55:14.759069 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.78s 2026-02-04 00:55:14.759074 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.44s 2026-02-04 00:55:14.759079 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.29s 2026-02-04 00:55:14.759085 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.54s 2026-02-04 00:55:14.759090 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.03s 2026-02-04 00:55:14.759096 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.98s 2026-02-04 
00:55:14.759103 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.78s 2026-02-04 00:55:14.759109 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.70s 2026-02-04 00:55:14.759134 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.63s 2026-02-04 00:55:14.759143 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.49s 2026-02-04 00:55:14.759149 | orchestrator | 2026-02-04 00:55:14 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:55:17.784760 | orchestrator | 2026-02-04 00:55:17 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:55:17.786775 | orchestrator | 2026-02-04 00:55:17 | INFO  | Task 979e1b8e-764b-47a3-bde3-86a99b656921 is in state STARTED 2026-02-04 00:55:17.786853 | orchestrator | 2026-02-04 00:55:17 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:55:20.829043 | orchestrator | 2026-02-04 00:55:20 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:55:20.830363 | orchestrator | 2026-02-04 00:55:20 | INFO  | Task 979e1b8e-764b-47a3-bde3-86a99b656921 is in state STARTED 2026-02-04 00:55:20.830410 | orchestrator | 2026-02-04 00:55:20 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:55:23.870418 | orchestrator | 2026-02-04 00:55:23 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:55:23.871905 | orchestrator | 2026-02-04 00:55:23 | INFO  | Task 979e1b8e-764b-47a3-bde3-86a99b656921 is in state STARTED 2026-02-04 00:55:23.871972 | orchestrator | 2026-02-04 00:55:23 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:55:26.916450 | orchestrator | 2026-02-04 00:55:26 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state STARTED 2026-02-04 00:55:26.917880 | orchestrator | 2026-02-04 00:55:26 | INFO  | Task 
979e1b8e-764b-47a3-bde3-86a99b656921 is in state STARTED 2026-02-04 00:55:26.917932 | orchestrator | 2026-02-04 00:55:26 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:55:29.954353 | orchestrator | 2026-02-04 00:55:29 | INFO  | Task dc02d23d-0524-4540-9a14-b12bc7bcd6b9 is in state STARTED 2026-02-04 00:55:29.959451 | orchestrator | 2026-02-04 00:55:29 | INFO  | Task b7959d77-c7c9-498c-8868-021a94de88e1 is in state SUCCESS 2026-02-04 00:55:29.960871 | orchestrator | 2026-02-04 00:55:29.960922 | orchestrator | 2026-02-04 00:55:29.960928 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-02-04 00:55:29.960934 | orchestrator | 2026-02-04 00:55:29.960938 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-02-04 00:55:29.960943 | orchestrator | Wednesday 04 February 2026 00:52:36 +0000 (0:00:00.074) 0:00:00.074 **** 2026-02-04 00:55:29.960947 | orchestrator | ok: [localhost] => { 2026-02-04 00:55:29.960953 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2026-02-04 00:55:29.960958 | orchestrator | } 2026-02-04 00:55:29.960962 | orchestrator | 2026-02-04 00:55:29.960966 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-02-04 00:55:29.960970 | orchestrator | Wednesday 04 February 2026 00:52:36 +0000 (0:00:00.029) 0:00:00.104 **** 2026-02-04 00:55:29.960974 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-02-04 00:55:29.960979 | orchestrator | ...ignoring 2026-02-04 00:55:29.960983 | orchestrator | 2026-02-04 00:55:29.960987 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-02-04 00:55:29.960991 | orchestrator | Wednesday 04 February 2026 00:52:39 +0000 (0:00:02.737) 0:00:02.842 **** 2026-02-04 00:55:29.960995 | orchestrator | skipping: [localhost] 2026-02-04 00:55:29.960999 | orchestrator | 2026-02-04 00:55:29.961013 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-02-04 00:55:29.961017 | orchestrator | Wednesday 04 February 2026 00:52:39 +0000 (0:00:00.037) 0:00:02.879 **** 2026-02-04 00:55:29.961021 | orchestrator | ok: [localhost] 2026-02-04 00:55:29.961031 | orchestrator | 2026-02-04 00:55:29.961035 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 00:55:29.961039 | orchestrator | 2026-02-04 00:55:29.961043 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 00:55:29.961046 | orchestrator | Wednesday 04 February 2026 00:52:39 +0000 (0:00:00.157) 0:00:03.037 **** 2026-02-04 00:55:29.961050 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:29.961054 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:29.961058 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:29.961062 | orchestrator | 2026-02-04 00:55:29.961066 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 00:55:29.961070 | orchestrator | Wednesday 04 February 2026 00:52:39 +0000 (0:00:00.282) 0:00:03.320 **** 2026-02-04 00:55:29.961074 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-04 00:55:29.961078 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2026-02-04 00:55:29.961082 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-04 00:55:29.961086 | orchestrator | 2026-02-04 00:55:29.961090 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-04 00:55:29.961172 | orchestrator | 2026-02-04 00:55:29.961181 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-04 00:55:29.961185 | orchestrator | Wednesday 04 February 2026 00:52:40 +0000 (0:00:00.443) 0:00:03.763 **** 2026-02-04 00:55:29.961206 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-04 00:55:29.961211 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-04 00:55:29.961215 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-04 00:55:29.961219 | orchestrator | 2026-02-04 00:55:29.961232 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-04 00:55:29.961236 | orchestrator | Wednesday 04 February 2026 00:52:40 +0000 (0:00:00.391) 0:00:04.154 **** 2026-02-04 00:55:29.961240 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:55:29.961245 | orchestrator | 2026-02-04 00:55:29.961249 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-04 00:55:29.961253 | orchestrator | Wednesday 04 February 2026 00:52:41 +0000 (0:00:00.546) 0:00:04.701 **** 2026-02-04 00:55:29.961273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 00:55:29.961280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 00:55:29.961292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 00:55:29.961297 | orchestrator | 2026-02-04 00:55:29.961304 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-04 00:55:29.961308 | orchestrator | Wednesday 04 February 2026 00:52:43 +0000 (0:00:02.824) 0:00:07.525 **** 2026-02-04 00:55:29.961312 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:29.961316 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:29.961320 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:29.961324 | orchestrator | 2026-02-04 00:55:29.961328 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-04 00:55:29.961331 | orchestrator | Wednesday 04 February 2026 00:52:44 +0000 (0:00:00.672) 0:00:08.198 **** 2026-02-04 00:55:29.961335 | orchestrator | 
skipping: [testbed-node-1] 2026-02-04 00:55:29.961339 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:29.961343 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:29.961347 | orchestrator | 2026-02-04 00:55:29.961353 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-04 00:55:29.961359 | orchestrator | Wednesday 04 February 2026 00:52:45 +0000 (0:00:01.285) 0:00:09.483 **** 2026-02-04 00:55:29.961369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 00:55:29.961386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 00:55:29.961393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 00:55:29.961403 | orchestrator | 2026-02-04 00:55:29.961409 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-04 00:55:29.961418 | orchestrator | Wednesday 04 February 2026 00:52:49 +0000 (0:00:03.524) 0:00:13.008 **** 2026-02-04 00:55:29.961424 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:29.961430 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:29.961436 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:29.961443 | orchestrator | 2026-02-04 00:55:29.961454 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-04 00:55:29.961462 | orchestrator | Wednesday 04 February 2026 00:52:50 +0000 (0:00:01.165) 0:00:14.174 **** 2026-02-04 00:55:29.961484 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:29.961497 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:29.961503 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:55:29.961510 | orchestrator | 2026-02-04 00:55:29.961516 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-04 00:55:29.961522 | orchestrator | Wednesday 04 February 2026 00:52:54 +0000 (0:00:04.023) 0:00:18.198 **** 2026-02-04 00:55:29.961528 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:55:29.961535 | orchestrator | 2026-02-04 00:55:29.961542 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-04 00:55:29.961555 | orchestrator | Wednesday 04 February 2026 00:52:55 +0000 (0:00:00.435) 0:00:18.634 **** 2026-02-04 00:55:29.961570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 00:55:29.961584 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:29.961592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 00:55:29.961597 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:29.961606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 00:55:29.961616 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:29.961621 | orchestrator | 2026-02-04 00:55:29.961625 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-04 00:55:29.961630 | orchestrator | Wednesday 04 February 2026 
00:52:57 +0000 (0:00:02.479) 0:00:21.113 **** 2026-02-04 00:55:29.961637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 00:55:29.961642 | orchestrator | skipping: [testbed-node-2] 2026-02-04 
00:55:29.961651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 00:55:29.961661 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:29.961665 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 00:55:29.961671 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:29.961675 | orchestrator | 2026-02-04 00:55:29.961682 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over backend internal TLS key] ***** 2026-02-04 00:55:29.961687 | orchestrator | Wednesday 04 February 2026 00:53:00 +0000 (0:00:02.537) 0:00:23.650 **** 2026-02-04 00:55:29.961695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2026-02-04 00:55:29.961708 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:29.961713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 00:55:29.961718 
| orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:29.961725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 00:55:29.961730 | orchestrator | skipping: [testbed-node-2] 2026-02-04 
00:55:29.961808 | orchestrator | 2026-02-04 00:55:29.961819 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-02-04 00:55:29.961823 | orchestrator | Wednesday 04 February 2026 00:53:03 +0000 (0:00:02.961) 0:00:26.612 **** 2026-02-04 00:55:29.961832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 00:55:29.961840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2026-02-04 00:55:29.961850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 00:55:29.961857 | orchestrator | 2026-02-04 00:55:29.961861 | orchestrator | TASK [mariadb : Create MariaDB 
volume] ***************************************** 2026-02-04 00:55:29.961865 | orchestrator | Wednesday 04 February 2026 00:53:06 +0000 (0:00:03.175) 0:00:29.788 **** 2026-02-04 00:55:29.961869 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:29.961873 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:55:29.961877 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:29.961881 | orchestrator | 2026-02-04 00:55:29.961885 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-04 00:55:29.961889 | orchestrator | Wednesday 04 February 2026 00:53:07 +0000 (0:00:00.772) 0:00:30.560 **** 2026-02-04 00:55:29.961893 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:29.961897 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:29.961900 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:29.961904 | orchestrator | 2026-02-04 00:55:29.961908 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-02-04 00:55:29.961912 | orchestrator | Wednesday 04 February 2026 00:53:07 +0000 (0:00:00.542) 0:00:31.103 **** 2026-02-04 00:55:29.961916 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:29.961920 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:29.961923 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:29.961927 | orchestrator | 2026-02-04 00:55:29.961931 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-02-04 00:55:29.961938 | orchestrator | Wednesday 04 February 2026 00:53:07 +0000 (0:00:00.315) 0:00:31.418 **** 2026-02-04 00:55:29.961943 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-02-04 00:55:29.961947 | orchestrator | ...ignoring 2026-02-04 00:55:29.961952 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-02-04 00:55:29.961958 | orchestrator | ...ignoring 2026-02-04 00:55:29.961965 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-02-04 00:55:29.961969 | orchestrator | ...ignoring 2026-02-04 00:55:29.961973 | orchestrator | 2026-02-04 00:55:29.961977 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-02-04 00:55:29.962071 | orchestrator | Wednesday 04 February 2026 00:53:18 +0000 (0:00:10.817) 0:00:42.236 **** 2026-02-04 00:55:29.962076 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:29.962080 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:29.962084 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:29.962088 | orchestrator | 2026-02-04 00:55:29.962133 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-02-04 00:55:29.962139 | orchestrator | Wednesday 04 February 2026 00:53:19 +0000 (0:00:00.379) 0:00:42.616 **** 2026-02-04 00:55:29.962143 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:29.962147 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:29.962165 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:29.962169 | orchestrator | 2026-02-04 00:55:29.962173 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-02-04 00:55:29.962177 | orchestrator | Wednesday 04 February 2026 00:53:19 +0000 (0:00:00.518) 0:00:43.134 **** 2026-02-04 00:55:29.962181 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:29.962185 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:29.962189 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:29.962192 | orchestrator | 2026-02-04 00:55:29.962196 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-02-04 00:55:29.962200 | orchestrator | Wednesday 04 February 2026 00:53:19 +0000 (0:00:00.383) 0:00:43.517 **** 2026-02-04 00:55:29.962204 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:29.962208 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:29.962212 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:29.962216 | orchestrator | 2026-02-04 00:55:29.962219 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-02-04 00:55:29.962229 | orchestrator | Wednesday 04 February 2026 00:53:20 +0000 (0:00:00.390) 0:00:43.908 **** 2026-02-04 00:55:29.962233 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:29.962237 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:29.962241 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:29.962245 | orchestrator | 2026-02-04 00:55:29.962249 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-02-04 00:55:29.962253 | orchestrator | Wednesday 04 February 2026 00:53:20 +0000 (0:00:00.362) 0:00:44.271 **** 2026-02-04 00:55:29.962257 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:29.962260 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:29.962264 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:29.962268 | orchestrator | 2026-02-04 00:55:29.962272 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-04 00:55:29.962276 | orchestrator | Wednesday 04 February 2026 00:53:21 +0000 (0:00:00.519) 0:00:44.790 **** 2026-02-04 00:55:29.962280 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:29.962284 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:29.962288 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-02-04 00:55:29.962292 | orchestrator | 2026-02-04 
00:55:29.962295 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-02-04 00:55:29.962299 | orchestrator | Wednesday 04 February 2026 00:53:21 +0000 (0:00:00.335) 0:00:45.125 **** 2026-02-04 00:55:29.962303 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:29.962307 | orchestrator | 2026-02-04 00:55:29.962311 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-02-04 00:55:29.962315 | orchestrator | Wednesday 04 February 2026 00:53:31 +0000 (0:00:10.094) 0:00:55.220 **** 2026-02-04 00:55:29.962318 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:29.962322 | orchestrator | 2026-02-04 00:55:29.962326 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-04 00:55:29.962330 | orchestrator | Wednesday 04 February 2026 00:53:31 +0000 (0:00:00.128) 0:00:55.348 **** 2026-02-04 00:55:29.962334 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:29.962338 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:29.962347 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:29.962351 | orchestrator | 2026-02-04 00:55:29.962354 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-02-04 00:55:29.962358 | orchestrator | Wednesday 04 February 2026 00:53:32 +0000 (0:00:00.945) 0:00:56.294 **** 2026-02-04 00:55:29.962362 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:29.962366 | orchestrator | 2026-02-04 00:55:29.962370 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-02-04 00:55:29.962374 | orchestrator | Wednesday 04 February 2026 00:53:39 +0000 (0:00:06.556) 0:01:02.851 **** 2026-02-04 00:55:29.962378 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:29.962382 | orchestrator | 2026-02-04 00:55:29.962386 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2026-02-04 00:55:29.962390 | orchestrator | Wednesday 04 February 2026 00:53:41 +0000 (0:00:01.703) 0:01:04.555 **** 2026-02-04 00:55:29.962393 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:29.962397 | orchestrator | 2026-02-04 00:55:29.962401 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-02-04 00:55:29.962405 | orchestrator | Wednesday 04 February 2026 00:53:43 +0000 (0:00:02.418) 0:01:06.973 **** 2026-02-04 00:55:29.962409 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:29.962413 | orchestrator | 2026-02-04 00:55:29.962421 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-04 00:55:29.962425 | orchestrator | Wednesday 04 February 2026 00:53:43 +0000 (0:00:00.129) 0:01:07.103 **** 2026-02-04 00:55:29.962429 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:29.962433 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:29.962436 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:29.962440 | orchestrator | 2026-02-04 00:55:29.962444 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-04 00:55:29.962448 | orchestrator | Wednesday 04 February 2026 00:53:43 +0000 (0:00:00.288) 0:01:07.391 **** 2026-02-04 00:55:29.962452 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:29.962456 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-04 00:55:29.962460 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:29.962464 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:55:29.962468 | orchestrator | 2026-02-04 00:55:29.962472 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-04 00:55:29.962478 | orchestrator | skipping: no hosts matched 2026-02-04 00:55:29.962485 | orchestrator | 2026-02-04 00:55:29.962491 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-04 00:55:29.962497 | orchestrator | 2026-02-04 00:55:29.962504 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-04 00:55:29.962510 | orchestrator | Wednesday 04 February 2026 00:53:44 +0000 (0:00:00.556) 0:01:07.948 **** 2026-02-04 00:55:29.962516 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:55:29.962522 | orchestrator | 2026-02-04 00:55:29.962528 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-04 00:55:29.962535 | orchestrator | Wednesday 04 February 2026 00:53:59 +0000 (0:00:14.967) 0:01:22.915 **** 2026-02-04 00:55:29.962541 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:29.962546 | orchestrator | 2026-02-04 00:55:29.962552 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-04 00:55:29.962559 | orchestrator | Wednesday 04 February 2026 00:54:14 +0000 (0:00:15.542) 0:01:38.458 **** 2026-02-04 00:55:29.962565 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:29.962571 | orchestrator | 2026-02-04 00:55:29.962577 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-04 00:55:29.962584 | orchestrator | 2026-02-04 00:55:29.962589 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-04 00:55:29.962597 | orchestrator | Wednesday 04 February 2026 00:54:17 +0000 (0:00:02.376) 0:01:40.834 **** 2026-02-04 00:55:29.962601 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:55:29.962609 | orchestrator | 2026-02-04 00:55:29.962613 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-04 00:55:29.962621 | orchestrator | Wednesday 04 February 2026 00:54:34 +0000 (0:00:17.203) 0:01:58.038 **** 2026-02-04 00:55:29.962625 | 
orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:29.962629 | orchestrator | 2026-02-04 00:55:29.962633 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-04 00:55:29.962636 | orchestrator | Wednesday 04 February 2026 00:54:51 +0000 (0:00:16.638) 0:02:14.677 **** 2026-02-04 00:55:29.962641 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:29.962644 | orchestrator | 2026-02-04 00:55:29.962648 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-04 00:55:29.962652 | orchestrator | 2026-02-04 00:55:29.962656 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-04 00:55:29.962660 | orchestrator | Wednesday 04 February 2026 00:54:53 +0000 (0:00:02.554) 0:02:17.231 **** 2026-02-04 00:55:29.962664 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:29.962668 | orchestrator | 2026-02-04 00:55:29.962671 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-04 00:55:29.962675 | orchestrator | Wednesday 04 February 2026 00:55:10 +0000 (0:00:16.968) 0:02:34.200 **** 2026-02-04 00:55:29.962679 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:29.962683 | orchestrator | 2026-02-04 00:55:29.962686 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-04 00:55:29.962690 | orchestrator | Wednesday 04 February 2026 00:55:11 +0000 (0:00:00.560) 0:02:34.760 **** 2026-02-04 00:55:29.962694 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:29.962698 | orchestrator | 2026-02-04 00:55:29.962702 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-04 00:55:29.962706 | orchestrator | 2026-02-04 00:55:29.962710 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-04 00:55:29.962714 | orchestrator | 
Wednesday 04 February 2026 00:55:13 +0000 (0:00:02.708) 0:02:37.469 **** 2026-02-04 00:55:29.962718 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:55:29.962722 | orchestrator | 2026-02-04 00:55:29.962726 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-04 00:55:29.962729 | orchestrator | Wednesday 04 February 2026 00:55:14 +0000 (0:00:00.513) 0:02:37.982 **** 2026-02-04 00:55:29.962733 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:29.962739 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:29.962745 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:29.962751 | orchestrator | 2026-02-04 00:55:29.962757 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-04 00:55:29.962763 | orchestrator | Wednesday 04 February 2026 00:55:16 +0000 (0:00:02.412) 0:02:40.395 **** 2026-02-04 00:55:29.962768 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:29.962774 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:29.962781 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:29.962786 | orchestrator | 2026-02-04 00:55:29.962792 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-04 00:55:29.962797 | orchestrator | Wednesday 04 February 2026 00:55:19 +0000 (0:00:02.246) 0:02:42.641 **** 2026-02-04 00:55:29.962803 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:29.962809 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:29.962816 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:29.962849 | orchestrator | 2026-02-04 00:55:29.962853 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-04 00:55:29.962857 | orchestrator | Wednesday 04 February 2026 00:55:21 +0000 (0:00:02.261) 0:02:44.903 **** 2026-02-04 00:55:29.962861 | 
orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:29.962869 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:29.962873 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:55:29.962877 | orchestrator | 2026-02-04 00:55:29.962883 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-04 00:55:29.962895 | orchestrator | Wednesday 04 February 2026 00:55:23 +0000 (0:00:02.301) 0:02:47.205 **** 2026-02-04 00:55:29.962901 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:55:29.962906 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:55:29.962912 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:55:29.962919 | orchestrator | 2026-02-04 00:55:29.962925 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-04 00:55:29.962931 | orchestrator | Wednesday 04 February 2026 00:55:26 +0000 (0:00:03.250) 0:02:50.455 **** 2026-02-04 00:55:29.962938 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:55:29.962945 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:55:29.962952 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:55:29.962960 | orchestrator | 2026-02-04 00:55:29.962964 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:55:29.962969 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-02-04 00:55:29.962973 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-02-04 00:55:29.962978 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-04 00:55:29.962982 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-04 00:55:29.962986 | orchestrator | 2026-02-04 00:55:29.962990 | orchestrator | 2026-02-04 00:55:29.962994 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-02-04 00:55:29.962998 | orchestrator | Wednesday 04 February 2026 00:55:27 +0000 (0:00:00.234) 0:02:50.689 **** 2026-02-04 00:55:29.963003 | orchestrator | =============================================================================== 2026-02-04 00:55:29.963008 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 32.18s 2026-02-04 00:55:29.963014 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 32.17s 2026-02-04 00:55:29.963025 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 16.97s 2026-02-04 00:55:29.963032 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.82s 2026-02-04 00:55:29.963038 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.09s 2026-02-04 00:55:29.963043 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 6.56s 2026-02-04 00:55:29.963049 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.93s 2026-02-04 00:55:29.963055 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.02s 2026-02-04 00:55:29.963062 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.52s 2026-02-04 00:55:29.963068 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.25s 2026-02-04 00:55:29.963074 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.18s 2026-02-04 00:55:29.963080 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.96s 2026-02-04 00:55:29.963086 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.82s 2026-02-04 00:55:29.963108 | orchestrator | Check MariaDB service 
--------------------------------------------------- 2.74s 2026-02-04 00:55:29.963115 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.71s 2026-02-04 00:55:29.963122 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.54s 2026-02-04 00:55:29.963128 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.48s 2026-02-04 00:55:29.963135 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.42s 2026-02-04 00:55:29.963141 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.41s 2026-02-04 00:55:29.963154 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.30s 2026-02-04 00:55:29.963161 | orchestrator | 2026-02-04 00:55:29 | INFO  | Task 979e1b8e-764b-47a3-bde3-86a99b656921 is in state STARTED 2026-02-04 00:55:29.965589 | orchestrator | 2026-02-04 00:55:29 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED 2026-02-04 00:55:29.965654 | orchestrator | 2026-02-04 00:55:29 | INFO  | Wait 1 second(s) until the next check
[... identical polling output elided: Tasks dc02d23d-0524-4540-9a14-b12bc7bcd6b9, 979e1b8e-764b-47a3-bde3-86a99b656921 and 473253a7-65b9-4b71-922f-b5e27d14078e reported in state STARTED every ~3 s from 00:55:33 to 00:57:01 ...]
2026-02-04 00:57:04.336285 | orchestrator | 2026-02-04 00:57:04 | INFO  | Task dc02d23d-0524-4540-9a14-b12bc7bcd6b9 is in state SUCCESS 2026-02-04 00:57:04.337999 | orchestrator | 2026-02-04 00:57:04.338116 | orchestrator | 2026-02-04 00:57:04.338127 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 00:57:04.338136 | orchestrator | 2026-02-04 00:57:04.338142 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 00:57:04.338149 | orchestrator | Wednesday 04 February 2026 00:55:31 +0000 (0:00:00.280) 0:00:00.280 **** 2026-02-04 00:57:04.338155 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:57:04.338162 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:57:04.338169 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:57:04.338176 | orchestrator | 2026-02-04 00:57:04.338183 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 00:57:04.338190 | orchestrator | Wednesday 04 February 2026 00:55:32 +0000 (0:00:00.337) 0:00:00.617 **** 2026-02-04 00:57:04.338197 | orchestrator | ok:
[testbed-node-0] => (item=enable_horizon_True) 2026-02-04 00:57:04.338204 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-02-04 00:57:04.338211 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-02-04 00:57:04.338399 | orchestrator | 2026-02-04 00:57:04.338415 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-02-04 00:57:04.338421 | orchestrator | 2026-02-04 00:57:04.338428 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-04 00:57:04.338435 | orchestrator | Wednesday 04 February 2026 00:55:32 +0000 (0:00:00.426) 0:00:01.044 **** 2026-02-04 00:57:04.338442 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:57:04.338450 | orchestrator | 2026-02-04 00:57:04.338469 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-02-04 00:57:04.338483 | orchestrator | Wednesday 04 February 2026 00:55:33 +0000 (0:00:00.510) 0:00:01.554 **** 2026-02-04 00:57:04.338510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 00:57:04.338558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 00:57:04.338572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 00:57:04.338585 | orchestrator | 2026-02-04 00:57:04.338591 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-02-04 00:57:04.338597 | orchestrator | Wednesday 04 February 2026 
00:55:34 +0000 (0:00:01.572) 0:00:03.126 **** 2026-02-04 00:57:04.338603 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:57:04.338610 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:57:04.338616 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:57:04.338623 | orchestrator | 2026-02-04 00:57:04.338629 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-04 00:57:04.338635 | orchestrator | Wednesday 04 February 2026 00:55:35 +0000 (0:00:00.429) 0:00:03.555 **** 2026-02-04 00:57:04.338648 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-04 00:57:04.338655 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-04 00:57:04.338661 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-02-04 00:57:04.338668 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-02-04 00:57:04.338674 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-02-04 00:57:04.338680 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-02-04 00:57:04.338687 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-02-04 00:57:04.338693 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-02-04 00:57:04.338699 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-04 00:57:04.338705 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-04 00:57:04.338711 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-02-04 00:57:04.338717 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-02-04 00:57:04.338723 | 
orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-02-04 00:57:04.338729 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-02-04 00:57:04.338740 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-02-04 00:57:04.338746 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-02-04 00:57:04.338752 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-04 00:57:04.338758 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-04 00:57:04.338764 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-02-04 00:57:04.338770 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-02-04 00:57:04.338776 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-02-04 00:57:04.338782 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-02-04 00:57:04.338786 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-02-04 00:57:04.338790 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-02-04 00:57:04.338795 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-02-04 00:57:04.338800 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-02-04 00:57:04.338805 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': 
True}) 2026-02-04 00:57:04.338809 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-02-04 00:57:04.338813 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-02-04 00:57:04.338817 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-02-04 00:57:04.338872 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-02-04 00:57:04.338878 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-02-04 00:57:04.338882 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-02-04 00:57:04.338888 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-02-04 00:57:04.338891 | orchestrator | 2026-02-04 00:57:04.338895 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-04 00:57:04.338899 | orchestrator | Wednesday 04 February 2026 00:55:35 +0000 (0:00:00.706) 0:00:04.262 **** 2026-02-04 00:57:04.338903 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:57:04.338907 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:57:04.338911 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:57:04.338915 | orchestrator | 2026-02-04 00:57:04.338919 | orchestrator | TASK [horizon : Check if policies shall be 
overwritten] ************************ 2026-02-04 00:57:04.338923 | orchestrator | Wednesday 04 February 2026 00:55:36 +0000 (0:00:00.328) 0:00:04.591 **** 2026-02-04 00:57:04.338930 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339005 | orchestrator | 2026-02-04 00:57:04.339013 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-04 00:57:04.339019 | orchestrator | Wednesday 04 February 2026 00:55:36 +0000 (0:00:00.115) 0:00:04.707 **** 2026-02-04 00:57:04.339038 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339044 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:57:04.339050 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:57:04.339056 | orchestrator | 2026-02-04 00:57:04.339063 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-04 00:57:04.339068 | orchestrator | Wednesday 04 February 2026 00:55:36 +0000 (0:00:00.442) 0:00:05.149 **** 2026-02-04 00:57:04.339074 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:57:04.339080 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:57:04.339087 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:57:04.339093 | orchestrator | 2026-02-04 00:57:04.339099 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-04 00:57:04.339106 | orchestrator | Wednesday 04 February 2026 00:55:37 +0000 (0:00:00.289) 0:00:05.439 **** 2026-02-04 00:57:04.339112 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339119 | orchestrator | 2026-02-04 00:57:04.339125 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-04 00:57:04.339132 | orchestrator | Wednesday 04 February 2026 00:55:37 +0000 (0:00:00.139) 0:00:05.578 **** 2026-02-04 00:57:04.339138 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339144 | orchestrator | skipping: [testbed-node-1] 
2026-02-04 00:57:04.339150 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:57:04.339157 | orchestrator | 2026-02-04 00:57:04.339162 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-04 00:57:04.339168 | orchestrator | Wednesday 04 February 2026 00:55:37 +0000 (0:00:00.383) 0:00:05.962 **** 2026-02-04 00:57:04.339175 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:57:04.339181 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:57:04.339188 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:57:04.339194 | orchestrator | 2026-02-04 00:57:04.339201 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-04 00:57:04.339207 | orchestrator | Wednesday 04 February 2026 00:55:37 +0000 (0:00:00.354) 0:00:06.317 **** 2026-02-04 00:57:04.339214 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339221 | orchestrator | 2026-02-04 00:57:04.339226 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-04 00:57:04.339233 | orchestrator | Wednesday 04 February 2026 00:55:38 +0000 (0:00:00.359) 0:00:06.677 **** 2026-02-04 00:57:04.339240 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339244 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:57:04.339248 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:57:04.339252 | orchestrator | 2026-02-04 00:57:04.339256 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-04 00:57:04.339260 | orchestrator | Wednesday 04 February 2026 00:55:38 +0000 (0:00:00.296) 0:00:06.974 **** 2026-02-04 00:57:04.339264 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:57:04.339268 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:57:04.339272 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:57:04.339276 | orchestrator | 2026-02-04 00:57:04.339279 | orchestrator | TASK [horizon : Check 
if policies shall be overwritten] ************************ 2026-02-04 00:57:04.339283 | orchestrator | Wednesday 04 February 2026 00:55:38 +0000 (0:00:00.313) 0:00:07.287 **** 2026-02-04 00:57:04.339287 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339291 | orchestrator | 2026-02-04 00:57:04.339295 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-04 00:57:04.339298 | orchestrator | Wednesday 04 February 2026 00:55:39 +0000 (0:00:00.149) 0:00:07.437 **** 2026-02-04 00:57:04.339302 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339306 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:57:04.339310 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:57:04.339314 | orchestrator | 2026-02-04 00:57:04.339318 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-04 00:57:04.339322 | orchestrator | Wednesday 04 February 2026 00:55:39 +0000 (0:00:00.279) 0:00:07.716 **** 2026-02-04 00:57:04.339330 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:57:04.339334 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:57:04.339338 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:57:04.339342 | orchestrator | 2026-02-04 00:57:04.339346 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-04 00:57:04.339349 | orchestrator | Wednesday 04 February 2026 00:55:39 +0000 (0:00:00.467) 0:00:08.184 **** 2026-02-04 00:57:04.339353 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339357 | orchestrator | 2026-02-04 00:57:04.339361 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-04 00:57:04.339370 | orchestrator | Wednesday 04 February 2026 00:55:39 +0000 (0:00:00.126) 0:00:08.311 **** 2026-02-04 00:57:04.339374 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339378 | orchestrator | skipping: 
[testbed-node-1] 2026-02-04 00:57:04.339382 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:57:04.339386 | orchestrator | 2026-02-04 00:57:04.339390 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-04 00:57:04.339394 | orchestrator | Wednesday 04 February 2026 00:55:40 +0000 (0:00:00.290) 0:00:08.602 **** 2026-02-04 00:57:04.339398 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:57:04.339402 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:57:04.339406 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:57:04.339409 | orchestrator | 2026-02-04 00:57:04.339413 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-04 00:57:04.339417 | orchestrator | Wednesday 04 February 2026 00:55:40 +0000 (0:00:00.316) 0:00:08.919 **** 2026-02-04 00:57:04.339421 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339425 | orchestrator | 2026-02-04 00:57:04.339429 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-04 00:57:04.339433 | orchestrator | Wednesday 04 February 2026 00:55:40 +0000 (0:00:00.116) 0:00:09.035 **** 2026-02-04 00:57:04.339437 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339440 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:57:04.339444 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:57:04.339448 | orchestrator | 2026-02-04 00:57:04.339452 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-04 00:57:04.339461 | orchestrator | Wednesday 04 February 2026 00:55:40 +0000 (0:00:00.266) 0:00:09.302 **** 2026-02-04 00:57:04.339465 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:57:04.339469 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:57:04.339473 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:57:04.339477 | orchestrator | 2026-02-04 00:57:04.339481 | orchestrator | TASK 
[horizon : Check if policies shall be overwritten] ************************ 2026-02-04 00:57:04.339485 | orchestrator | Wednesday 04 February 2026 00:55:41 +0000 (0:00:00.501) 0:00:09.803 **** 2026-02-04 00:57:04.339488 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339492 | orchestrator | 2026-02-04 00:57:04.339496 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-04 00:57:04.339500 | orchestrator | Wednesday 04 February 2026 00:55:41 +0000 (0:00:00.253) 0:00:10.057 **** 2026-02-04 00:57:04.339504 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339508 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:57:04.339512 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:57:04.339515 | orchestrator | 2026-02-04 00:57:04.339519 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-04 00:57:04.339523 | orchestrator | Wednesday 04 February 2026 00:55:42 +0000 (0:00:00.363) 0:00:10.421 **** 2026-02-04 00:57:04.339527 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:57:04.339531 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:57:04.339535 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:57:04.339538 | orchestrator | 2026-02-04 00:57:04.339542 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-04 00:57:04.339546 | orchestrator | Wednesday 04 February 2026 00:55:42 +0000 (0:00:00.357) 0:00:10.778 **** 2026-02-04 00:57:04.339550 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339557 | orchestrator | 2026-02-04 00:57:04.339561 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-04 00:57:04.339565 | orchestrator | Wednesday 04 February 2026 00:55:42 +0000 (0:00:00.138) 0:00:10.917 **** 2026-02-04 00:57:04.339569 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339573 | orchestrator 
| skipping: [testbed-node-1] 2026-02-04 00:57:04.339576 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:57:04.339580 | orchestrator | 2026-02-04 00:57:04.339584 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-04 00:57:04.339588 | orchestrator | Wednesday 04 February 2026 00:55:43 +0000 (0:00:00.477) 0:00:11.394 **** 2026-02-04 00:57:04.339592 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:57:04.339596 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:57:04.339600 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:57:04.339604 | orchestrator | 2026-02-04 00:57:04.339607 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-04 00:57:04.339612 | orchestrator | Wednesday 04 February 2026 00:55:43 +0000 (0:00:00.388) 0:00:11.783 **** 2026-02-04 00:57:04.339616 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339619 | orchestrator | 2026-02-04 00:57:04.339623 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-04 00:57:04.339627 | orchestrator | Wednesday 04 February 2026 00:55:43 +0000 (0:00:00.156) 0:00:11.939 **** 2026-02-04 00:57:04.339631 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339635 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:57:04.339638 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:57:04.339642 | orchestrator | 2026-02-04 00:57:04.339646 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-02-04 00:57:04.339650 | orchestrator | Wednesday 04 February 2026 00:55:43 +0000 (0:00:00.277) 0:00:12.217 **** 2026-02-04 00:57:04.339654 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:57:04.339658 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:57:04.339661 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:57:04.339665 | orchestrator | 2026-02-04 00:57:04.339669 | 
orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-02-04 00:57:04.339673 | orchestrator | Wednesday 04 February 2026 00:55:44 +0000 (0:00:00.322) 0:00:12.539 **** 2026-02-04 00:57:04.339677 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339681 | orchestrator | 2026-02-04 00:57:04.339684 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-02-04 00:57:04.339688 | orchestrator | Wednesday 04 February 2026 00:55:44 +0000 (0:00:00.130) 0:00:12.670 **** 2026-02-04 00:57:04.339692 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339696 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:57:04.339700 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:57:04.339703 | orchestrator | 2026-02-04 00:57:04.339707 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-02-04 00:57:04.339711 | orchestrator | Wednesday 04 February 2026 00:55:44 +0000 (0:00:00.467) 0:00:13.137 **** 2026-02-04 00:57:04.339715 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:57:04.339719 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:57:04.339723 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:57:04.339727 | orchestrator | 2026-02-04 00:57:04.339734 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-02-04 00:57:04.339737 | orchestrator | Wednesday 04 February 2026 00:55:46 +0000 (0:00:01.679) 0:00:14.817 **** 2026-02-04 00:57:04.339741 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-04 00:57:04.339745 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-04 00:57:04.339749 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-04 00:57:04.339753 | orchestrator | 
2026-02-04 00:57:04.339757 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-02-04 00:57:04.339764 | orchestrator | Wednesday 04 February 2026 00:55:48 +0000 (0:00:01.833) 0:00:16.651 **** 2026-02-04 00:57:04.339768 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-04 00:57:04.339773 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-04 00:57:04.339777 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-04 00:57:04.339781 | orchestrator | 2026-02-04 00:57:04.339787 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-02-04 00:57:04.339791 | orchestrator | Wednesday 04 February 2026 00:55:50 +0000 (0:00:02.154) 0:00:18.805 **** 2026-02-04 00:57:04.339795 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-04 00:57:04.339799 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-04 00:57:04.339803 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-04 00:57:04.339807 | orchestrator | 2026-02-04 00:57:04.339811 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-02-04 00:57:04.339814 | orchestrator | Wednesday 04 February 2026 00:55:52 +0000 (0:00:01.962) 0:00:20.767 **** 2026-02-04 00:57:04.339818 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339822 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:57:04.339826 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:57:04.339830 | orchestrator | 2026-02-04 00:57:04.339834 | orchestrator | TASK [horizon : Copying over custom 
themes] ************************************ 2026-02-04 00:57:04.339837 | orchestrator | Wednesday 04 February 2026 00:55:52 +0000 (0:00:00.297) 0:00:21.065 **** 2026-02-04 00:57:04.339841 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339845 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:57:04.339849 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:57:04.339853 | orchestrator | 2026-02-04 00:57:04.339857 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-04 00:57:04.339861 | orchestrator | Wednesday 04 February 2026 00:55:52 +0000 (0:00:00.277) 0:00:21.342 **** 2026-02-04 00:57:04.339865 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:57:04.339868 | orchestrator | 2026-02-04 00:57:04.339872 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-02-04 00:57:04.339876 | orchestrator | Wednesday 04 February 2026 00:55:53 +0000 (0:00:00.764) 0:00:22.106 **** 2026-02-04 00:57:04.339886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 00:57:04.339901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 00:57:04.339909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 00:57:04.339917 | orchestrator | 2026-02-04 00:57:04.339921 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-02-04 00:57:04.339925 | orchestrator | Wednesday 04 February 2026 00:55:55 +0000 
(0:00:01.555) 0:00:23.662 **** 2026-02-04 00:57:04.339933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 00:57:04.339971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 00:57:04.339985 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.339991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 00:57:04.339998 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:57:04.340005 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:57:04.340133 | orchestrator | 2026-02-04 00:57:04.340144 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-02-04 00:57:04.340151 | orchestrator | Wednesday 04 February 2026 00:55:55 +0000 (0:00:00.689) 0:00:24.352 **** 2026-02-04 00:57:04.340171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 00:57:04.340179 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.340185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 00:57:04.340198 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:57:04.340218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 00:57:04.340226 | 
orchestrator | skipping: [testbed-node-2] 2026-02-04 00:57:04.340232 | orchestrator | 2026-02-04 00:57:04.340237 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-02-04 00:57:04.340243 | orchestrator | Wednesday 04 February 2026 00:55:56 +0000 (0:00:00.833) 0:00:25.185 **** 2026-02-04 00:57:04.340252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 00:57:04.340270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 00:57:04.340281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 00:57:04.340293 | orchestrator | 2026-02-04 00:57:04.340299 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-04 00:57:04.340305 | orchestrator | Wednesday 04 February 2026 00:55:58 +0000 (0:00:01.459) 0:00:26.644 **** 2026-02-04 00:57:04.340311 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:57:04.340317 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:57:04.340323 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:57:04.340328 | orchestrator | 2026-02-04 00:57:04.340334 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-04 00:57:04.340343 | orchestrator | Wednesday 04 February 2026 00:55:58 +0000 (0:00:00.285) 0:00:26.930 **** 2026-02-04 00:57:04.340350 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:57:04.340356 | orchestrator | 2026-02-04 00:57:04.340363 | orchestrator | TASK 
[horizon : Creating Horizon database] *************************************
2026-02-04 00:57:04.340369 | orchestrator | Wednesday 04 February 2026 00:55:59 +0000 (0:00:00.508) 0:00:27.438 ****
2026-02-04 00:57:04.340375 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:57:04.340382 | orchestrator |
2026-02-04 00:57:04.340388 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-02-04 00:57:04.340394 | orchestrator | Wednesday 04 February 2026 00:56:01 +0000 (0:00:02.633) 0:00:30.072 ****
2026-02-04 00:57:04.340399 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:57:04.340405 | orchestrator |
2026-02-04 00:57:04.340411 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-02-04 00:57:04.340417 | orchestrator | Wednesday 04 February 2026 00:56:04 +0000 (0:00:02.758) 0:00:32.831 ****
2026-02-04 00:57:04.340423 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:57:04.340429 | orchestrator |
2026-02-04 00:57:04.340435 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-04 00:57:04.340441 | orchestrator | Wednesday 04 February 2026 00:56:21 +0000 (0:00:16.752) 0:00:49.583 ****
2026-02-04 00:57:04.340447 | orchestrator |
2026-02-04 00:57:04.340453 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-04 00:57:04.340459 | orchestrator | Wednesday 04 February 2026 00:56:21 +0000 (0:00:00.064) 0:00:49.648 ****
2026-02-04 00:57:04.340465 | orchestrator |
2026-02-04 00:57:04.340471 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-02-04 00:57:04.340478 | orchestrator | Wednesday 04 February 2026 00:56:21 +0000 (0:00:00.065) 0:00:49.714 ****
2026-02-04 00:57:04.340489 | orchestrator |
2026-02-04 00:57:04.340496 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container]
**************************
2026-02-04 00:57:04.340503 | orchestrator | Wednesday 04 February 2026 00:56:21 +0000 (0:00:00.063) 0:00:49.778 ****
2026-02-04 00:57:04.340509 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:57:04.340515 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:57:04.340521 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:57:04.340526 | orchestrator |
2026-02-04 00:57:04.340532 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:57:04.340539 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-04 00:57:04.340546 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-02-04 00:57:04.340553 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-02-04 00:57:04.340559 | orchestrator |
2026-02-04 00:57:04.340566 | orchestrator |
2026-02-04 00:57:04.340572 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:57:04.340578 | orchestrator | Wednesday 04 February 2026 00:57:01 +0000 (0:00:39.973) 0:01:29.751 ****
2026-02-04 00:57:04.340585 | orchestrator | ===============================================================================
2026-02-04 00:57:04.340591 | orchestrator | horizon : Restart horizon container ------------------------------------ 39.97s
2026-02-04 00:57:04.340597 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.75s
2026-02-04 00:57:04.340604 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.76s
2026-02-04 00:57:04.340610 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.63s
2026-02-04 00:57:04.340616 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.15s
2026-02-04 00:57:04.340622 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.96s
2026-02-04 00:57:04.340629 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.83s
2026-02-04 00:57:04.340635 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.68s
2026-02-04 00:57:04.340641 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.57s
2026-02-04 00:57:04.340647 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.56s
2026-02-04 00:57:04.340658 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.46s
2026-02-04 00:57:04.340662 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.83s
2026-02-04 00:57:04.340666 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.76s
2026-02-04 00:57:04.340670 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.71s
2026-02-04 00:57:04.340674 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.69s
2026-02-04 00:57:04.340677 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.51s
2026-02-04 00:57:04.340681 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.51s
2026-02-04 00:57:04.340685 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s
2026-02-04 00:57:04.340689 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.48s
2026-02-04 00:57:04.340693 | orchestrator | horizon : Update policy file name --------------------------------------- 0.47s
2026-02-04 00:57:04.340697 | orchestrator | 2026-02-04 00:57:04 | INFO  | Task 979e1b8e-764b-47a3-bde3-86a99b656921 is in state
STARTED
2026-02-04 00:57:04.340705 | orchestrator | 2026-02-04 00:57:04 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED
2026-02-04 00:57:04.340714 | orchestrator | 2026-02-04 00:57:04 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:57:07.384407 | orchestrator | 2026-02-04 00:57:07 | INFO  | Task 979e1b8e-764b-47a3-bde3-86a99b656921 is in state STARTED
2026-02-04 00:57:07.386500 | orchestrator | 2026-02-04 00:57:07 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED
2026-02-04 00:57:07.386648 | orchestrator | 2026-02-04 00:57:07 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:57:10.431564 | orchestrator | 2026-02-04 00:57:10 | INFO  | Task 979e1b8e-764b-47a3-bde3-86a99b656921 is in state STARTED
2026-02-04 00:57:10.431990 | orchestrator | 2026-02-04 00:57:10 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED
2026-02-04 00:57:10.432010 | orchestrator | 2026-02-04 00:57:10 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:57:13.476180 | orchestrator | 2026-02-04 00:57:13 | INFO  | Task 979e1b8e-764b-47a3-bde3-86a99b656921 is in state STARTED
2026-02-04 00:57:13.480058 | orchestrator | 2026-02-04 00:57:13 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED
2026-02-04 00:57:13.480884 | orchestrator | 2026-02-04 00:57:13 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:57:16.514519 | orchestrator | 2026-02-04 00:57:16 | INFO  | Task 979e1b8e-764b-47a3-bde3-86a99b656921 is in state STARTED
2026-02-04 00:57:16.516195 | orchestrator | 2026-02-04 00:57:16 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED
2026-02-04 00:57:16.516231 | orchestrator | 2026-02-04 00:57:16 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:57:19.557342 | orchestrator | 2026-02-04 00:57:19 | INFO  | Task 979e1b8e-764b-47a3-bde3-86a99b656921 is in state STARTED
2026-02-04 00:57:19.559734 | orchestrator | 2026-02-04 00:57:19 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED
2026-02-04 00:57:19.559807 | orchestrator | 2026-02-04 00:57:19 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:57:22.602340 | orchestrator | 2026-02-04 00:57:22 | INFO  | Task 979e1b8e-764b-47a3-bde3-86a99b656921 is in state STARTED
2026-02-04 00:57:22.603949 | orchestrator | 2026-02-04 00:57:22 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED
2026-02-04 00:57:22.603990 | orchestrator | 2026-02-04 00:57:22 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:57:25.639770 | orchestrator | 2026-02-04 00:57:25 | INFO  | Task 979e1b8e-764b-47a3-bde3-86a99b656921 is in state STARTED
2026-02-04 00:57:25.641347 | orchestrator | 2026-02-04 00:57:25 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED
2026-02-04 00:57:25.641399 | orchestrator | 2026-02-04 00:57:25 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:57:28.689713 | orchestrator | 2026-02-04 00:57:28 | INFO  | Task c37dd944-b27d-4131-bf9c-f2b8bdd373f6 is in state STARTED
2026-02-04 00:57:28.693716 | orchestrator | 2026-02-04 00:57:28 | INFO  | Task 979e1b8e-764b-47a3-bde3-86a99b656921 is in state SUCCESS
2026-02-04 00:57:28.695291 | orchestrator |
2026-02-04 00:57:28.695341 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-04 00:57:28.695351 | orchestrator | 2.16.14
2026-02-04 00:57:28.695359 | orchestrator |
2026-02-04 00:57:28.695365 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-02-04 00:57:28.695372 | orchestrator |
2026-02-04 00:57:28.695379 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-04 00:57:28.695385 | orchestrator | Wednesday 04 February 2026 00:55:16 +0000 (0:00:00.563) 0:00:00.564 ****
2026-02-04 00:57:28.695401 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3,
testbed-node-4, testbed-node-5
2026-02-04 00:57:28.695422 | orchestrator |
2026-02-04 00:57:28.695557 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-04 00:57:28.695568 | orchestrator | Wednesday 04 February 2026 00:55:17 +0000 (0:00:00.598) 0:00:01.162 ****
2026-02-04 00:57:28.695574 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:57:28.695791 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:57:28.695806 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:57:28.695816 | orchestrator |
2026-02-04 00:57:28.695827 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-04 00:57:28.695839 | orchestrator | Wednesday 04 February 2026 00:55:17 +0000 (0:00:00.627) 0:00:01.790 ****
2026-02-04 00:57:28.695851 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:57:28.695862 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:57:28.695872 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:57:28.695878 | orchestrator |
2026-02-04 00:57:28.695885 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-04 00:57:28.695891 | orchestrator | Wednesday 04 February 2026 00:55:18 +0000 (0:00:00.315) 0:00:02.105 ****
2026-02-04 00:57:28.695994 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:57:28.696009 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:57:28.696020 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:57:28.696029 | orchestrator |
2026-02-04 00:57:28.696038 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-04 00:57:28.696048 | orchestrator | Wednesday 04 February 2026 00:55:19 +0000 (0:00:00.836) 0:00:02.942 ****
2026-02-04 00:57:28.696058 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:57:28.696069 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:57:28.696080 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:57:28.696091 | orchestrator |
2026-02-04 00:57:28.696102 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-04 00:57:28.696288 | orchestrator | Wednesday 04 February 2026 00:55:19 +0000 (0:00:00.299) 0:00:03.242 ****
2026-02-04 00:57:28.696294 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:57:28.696300 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:57:28.696307 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:57:28.696313 | orchestrator |
2026-02-04 00:57:28.696319 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-04 00:57:28.696326 | orchestrator | Wednesday 04 February 2026 00:55:19 +0000 (0:00:00.301) 0:00:03.543 ****
2026-02-04 00:57:28.696332 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:57:28.696338 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:57:28.696344 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:57:28.696351 | orchestrator |
2026-02-04 00:57:28.696357 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-04 00:57:28.696363 | orchestrator | Wednesday 04 February 2026 00:55:20 +0000 (0:00:00.324) 0:00:03.868 ****
2026-02-04 00:57:28.696369 | orchestrator | skipping: [testbed-node-3]
2026-02-04 00:57:28.696376 | orchestrator | skipping: [testbed-node-4]
2026-02-04 00:57:28.696382 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:57:28.696389 | orchestrator |
2026-02-04 00:57:28.696395 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-04 00:57:28.696401 | orchestrator | Wednesday 04 February 2026 00:55:20 +0000 (0:00:00.449) 0:00:04.317 ****
2026-02-04 00:57:28.696407 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:57:28.696414 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:57:28.696420 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:57:28.696426 | orchestrator |
2026-02-04 00:57:28.696432 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-04 00:57:28.696438 | orchestrator | Wednesday 04 February 2026 00:55:20 +0000 (0:00:00.290) 0:00:04.607 ****
2026-02-04 00:57:28.696445 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-04 00:57:28.696451 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 00:57:28.696457 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 00:57:28.696473 | orchestrator |
2026-02-04 00:57:28.696480 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-04 00:57:28.696486 | orchestrator | Wednesday 04 February 2026 00:55:21 +0000 (0:00:00.615) 0:00:05.223 ****
2026-02-04 00:57:28.696493 | orchestrator | ok: [testbed-node-3]
2026-02-04 00:57:28.696499 | orchestrator | ok: [testbed-node-4]
2026-02-04 00:57:28.696505 | orchestrator | ok: [testbed-node-5]
2026-02-04 00:57:28.696514 | orchestrator |
2026-02-04 00:57:28.696524 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-04 00:57:28.696533 | orchestrator | Wednesday 04 February 2026 00:55:21 +0000 (0:00:00.430) 0:00:05.654 ****
2026-02-04 00:57:28.696548 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-04 00:57:28.696559 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 00:57:28.696569 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 00:57:28.696579 | orchestrator |
2026-02-04 00:57:28.696589 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-04 00:57:28.696599 | orchestrator | Wednesday 04 February 2026 00:55:23 +0000 (0:00:02.148) 0:00:07.803 ****
2026-02-04
00:57:28.696610 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-04 00:57:28.696636 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-04 00:57:28.696657 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-04 00:57:28.696668 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.696678 | orchestrator | 2026-02-04 00:57:28.696723 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-04 00:57:28.696731 | orchestrator | Wednesday 04 February 2026 00:55:24 +0000 (0:00:00.688) 0:00:08.492 **** 2026-02-04 00:57:28.696745 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.696754 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.696761 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.696767 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.696774 | orchestrator | 2026-02-04 00:57:28.696780 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-04 00:57:28.696786 | orchestrator | Wednesday 04 February 2026 00:55:25 +0000 (0:00:00.826) 0:00:09.318 **** 2026-02-04 00:57:28.696794 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.696802 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.696808 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.696821 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.696828 | orchestrator | 2026-02-04 00:57:28.696834 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-04 00:57:28.696840 | orchestrator | Wednesday 04 February 2026 00:55:25 +0000 (0:00:00.307) 0:00:09.626 **** 2026-02-04 00:57:28.696848 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'ba87923d1d52', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-04 00:55:22.481448', 'end': '2026-02-04 00:55:22.536919', 'delta': '0:00:00.055471', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', 
'_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['ba87923d1d52'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-04 00:57:28.696856 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6d77124edd75', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-04 00:55:23.222303', 'end': '2026-02-04 00:55:23.269705', 'delta': '0:00:00.047402', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6d77124edd75'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-04 00:57:28.696886 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a29428ae7f58', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-04 00:55:23.785480', 'end': '2026-02-04 00:55:23.835567', 'delta': '0:00:00.050087', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a29428ae7f58'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-04 00:57:28.696894 | orchestrator | 2026-02-04 
00:57:28.696942 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-04 00:57:28.696951 | orchestrator | Wednesday 04 February 2026 00:55:25 +0000 (0:00:00.190) 0:00:09.816 **** 2026-02-04 00:57:28.696959 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:57:28.696967 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:57:28.696974 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:57:28.696981 | orchestrator | 2026-02-04 00:57:28.696989 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-04 00:57:28.696996 | orchestrator | Wednesday 04 February 2026 00:55:26 +0000 (0:00:00.439) 0:00:10.256 **** 2026-02-04 00:57:28.697003 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-04 00:57:28.697011 | orchestrator | 2026-02-04 00:57:28.697018 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-04 00:57:28.697025 | orchestrator | Wednesday 04 February 2026 00:55:28 +0000 (0:00:01.752) 0:00:12.008 **** 2026-02-04 00:57:28.697033 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.697041 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:57:28.697054 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:57:28.697061 | orchestrator | 2026-02-04 00:57:28.697068 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-04 00:57:28.697076 | orchestrator | Wednesday 04 February 2026 00:55:28 +0000 (0:00:00.293) 0:00:12.302 **** 2026-02-04 00:57:28.697083 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.697090 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:57:28.697101 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:57:28.697112 | orchestrator | 2026-02-04 00:57:28.697129 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-04 
00:57:28.697140 | orchestrator | Wednesday 04 February 2026 00:55:28 +0000 (0:00:00.438) 0:00:12.741 **** 2026-02-04 00:57:28.697150 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.697160 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:57:28.697170 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:57:28.697180 | orchestrator | 2026-02-04 00:57:28.697191 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-04 00:57:28.697201 | orchestrator | Wednesday 04 February 2026 00:55:29 +0000 (0:00:00.461) 0:00:13.203 **** 2026-02-04 00:57:28.697212 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:57:28.697223 | orchestrator | 2026-02-04 00:57:28.697234 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-04 00:57:28.697245 | orchestrator | Wednesday 04 February 2026 00:55:29 +0000 (0:00:00.133) 0:00:13.336 **** 2026-02-04 00:57:28.697255 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.697266 | orchestrator | 2026-02-04 00:57:28.697273 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-04 00:57:28.697279 | orchestrator | Wednesday 04 February 2026 00:55:29 +0000 (0:00:00.232) 0:00:13.569 **** 2026-02-04 00:57:28.697285 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.697292 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:57:28.697298 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:57:28.697304 | orchestrator | 2026-02-04 00:57:28.697310 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-04 00:57:28.697317 | orchestrator | Wednesday 04 February 2026 00:55:30 +0000 (0:00:00.308) 0:00:13.878 **** 2026-02-04 00:57:28.697323 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.697330 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:57:28.697336 | orchestrator | 
skipping: [testbed-node-5] 2026-02-04 00:57:28.697342 | orchestrator | 2026-02-04 00:57:28.697348 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-04 00:57:28.697355 | orchestrator | Wednesday 04 February 2026 00:55:30 +0000 (0:00:00.392) 0:00:14.270 **** 2026-02-04 00:57:28.697361 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.697367 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:57:28.697373 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:57:28.697380 | orchestrator | 2026-02-04 00:57:28.697386 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-04 00:57:28.697392 | orchestrator | Wednesday 04 February 2026 00:55:31 +0000 (0:00:00.610) 0:00:14.880 **** 2026-02-04 00:57:28.697398 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.697404 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:57:28.697411 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:57:28.697417 | orchestrator | 2026-02-04 00:57:28.697423 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-04 00:57:28.697429 | orchestrator | Wednesday 04 February 2026 00:55:31 +0000 (0:00:00.362) 0:00:15.243 **** 2026-02-04 00:57:28.697436 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.697442 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:57:28.697448 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:57:28.697454 | orchestrator | 2026-02-04 00:57:28.697460 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-04 00:57:28.697467 | orchestrator | Wednesday 04 February 2026 00:55:31 +0000 (0:00:00.305) 0:00:15.549 **** 2026-02-04 00:57:28.697479 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.697485 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:57:28.697491 | orchestrator | 
skipping: [testbed-node-5] 2026-02-04 00:57:28.697525 | orchestrator | 2026-02-04 00:57:28.697532 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-04 00:57:28.697538 | orchestrator | Wednesday 04 February 2026 00:55:32 +0000 (0:00:00.303) 0:00:15.852 **** 2026-02-04 00:57:28.697545 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.697551 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:57:28.697557 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:57:28.697564 | orchestrator | 2026-02-04 00:57:28.697570 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-04 00:57:28.697580 | orchestrator | Wednesday 04 February 2026 00:55:32 +0000 (0:00:00.520) 0:00:16.373 **** 2026-02-04 00:57:28.697588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--29c6bc8c--f904--55ca--809f--6429b65a49e4-osd--block--29c6bc8c--f904--55ca--809f--6429b65a49e4', 'dm-uuid-LVM-futtSpiu2Dc6zeEwlRIqGKxk2240GEq2NDItB2Yekp0j5JSGwBE6yhTovNBjHOIV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.697596 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1b7fb365--e96c--53e1--a018--1a0a8a845031-osd--block--1b7fb365--e96c--53e1--a018--1a0a8a845031', 'dm-uuid-LVM-auzvDlBNDf4L39V45seqETFBTe0hlfpeBqlD6kkCsuNwd2IcE42BuaoOpC4zPjAE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.697603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.697610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.697617 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.697623 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.697630 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.697659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.697670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.697677 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.697685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016', 'scsi-SQEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part1', 'scsi-SQEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part14', 'scsi-SQEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part15', 'scsi-SQEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part16', 'scsi-SQEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:57:28.697694 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--29c6bc8c--f904--55ca--809f--6429b65a49e4-osd--block--29c6bc8c--f904--55ca--809f--6429b65a49e4'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5yak5L-o4at-xQ2L-P6UC-hZvx-2Sm1-YoLKVV', 'scsi-0QEMU_QEMU_HARDDISK_1679d905-c182-4dcb-a16f-ff388fb87fa8', 'scsi-SQEMU_QEMU_HARDDISK_1679d905-c182-4dcb-a16f-ff388fb87fa8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:57:28.697725 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1b7fb365--e96c--53e1--a018--1a0a8a845031-osd--block--1b7fb365--e96c--53e1--a018--1a0a8a845031'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-J1Zsmh-e107-W6nI-zKJc-WW2R-CulX-Lhjb6v', 'scsi-0QEMU_QEMU_HARDDISK_6b00b999-8e8e-4579-a93c-a7b8030012f4', 'scsi-SQEMU_QEMU_HARDDISK_6b00b999-8e8e-4579-a93c-a7b8030012f4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:57:28.697736 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f279b9c8-b4a1-41c6-b00f-bd5a2c0b4c81', 'scsi-SQEMU_QEMU_HARDDISK_f279b9c8-b4a1-41c6-b00f-bd5a2c0b4c81'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:57:28.697751 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6fbd78c3--b583--5fde--80ba--0c2cdf325dc7-osd--block--6fbd78c3--b583--5fde--80ba--0c2cdf325dc7', 'dm-uuid-LVM-NTd3wVqFaLZs0HHLMiyjtJ62L05RnYUwQ92nicsvk9XmhXeB6EY8l1ES0A9vlzPg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.697766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:57:28.697777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c6467dc2--49cb--511a--ae45--cb6bd8ce65cd-osd--block--c6467dc2--49cb--511a--ae45--cb6bd8ce65cd', 
'dm-uuid-LVM-7ccqDb2IMlGvbROgddrBNTB0o1Up1e8jKjBNkdmhEIPN7p0IyTvtaslg9ZIPMZAL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.697788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.697806 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.697847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.697871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.697883 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.697894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.697926 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.697939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.697950 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.697968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004', 'scsi-SQEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part1', 'scsi-SQEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part14', 'scsi-SQEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part15', 'scsi-SQEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part16', 'scsi-SQEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:57:28.697992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6fbd78c3--b583--5fde--80ba--0c2cdf325dc7-osd--block--6fbd78c3--b583--5fde--80ba--0c2cdf325dc7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FO9Ylb-oyya-bgzx-QKlN-HkEC-gQ2h-HRhlzY', 'scsi-0QEMU_QEMU_HARDDISK_b014772c-38b5-4caa-9603-223bc8ef3a74', 'scsi-SQEMU_QEMU_HARDDISK_b014772c-38b5-4caa-9603-223bc8ef3a74'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:57:28.698005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c6467dc2--49cb--511a--ae45--cb6bd8ce65cd-osd--block--c6467dc2--49cb--511a--ae45--cb6bd8ce65cd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Bq1yaU-lh82-MUro-hneI-alZs-sfZu-Db2wDT', 'scsi-0QEMU_QEMU_HARDDISK_70272979-0540-4b40-8ef0-41f73c6a4a5a', 'scsi-SQEMU_QEMU_HARDDISK_70272979-0540-4b40-8ef0-41f73c6a4a5a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:57:28.698060 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b592fbb-955b-4fdf-b12f-717d86698fde', 'scsi-SQEMU_QEMU_HARDDISK_5b592fbb-955b-4fdf-b12f-717d86698fde'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:57:28.698068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:57:28.698079 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:57:28.698086 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--81b3d681--fa24--5b92--b5b8--11e84f5b22d9-osd--block--81b3d681--fa24--5b92--b5b8--11e84f5b22d9', 'dm-uuid-LVM-6y4w4ArVi5D1tyooWsj9aIJCekc2S7nLhYC0RCkddpwlSQuyk6aIosyXEoEqPeQY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.698099 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5659fb6c--b6d6--5368--9f3c--0e525a1333df-osd--block--5659fb6c--b6d6--5368--9f3c--0e525a1333df', 'dm-uuid-LVM-3afIFmYJiFa9RqNchm5P6Eeh4oUATUr9E8CsbzdSyxtBuOKMUchzqSt7IIBMTCOu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.698110 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.698117 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.698123 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.698130 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.698137 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.698143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.698153 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.698160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 00:57:28.698175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3', 'scsi-SQEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part1', 'scsi-SQEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part14', 'scsi-SQEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part15', 'scsi-SQEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part16', 'scsi-SQEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:57:28.698183 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--81b3d681--fa24--5b92--b5b8--11e84f5b22d9-osd--block--81b3d681--fa24--5b92--b5b8--11e84f5b22d9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xqFisJ-bzmJ-mbhN-Vi30-8HQb-PT81-9BRMzc', 'scsi-0QEMU_QEMU_HARDDISK_330cb526-2149-4826-b513-02c8e88ca89e', 'scsi-SQEMU_QEMU_HARDDISK_330cb526-2149-4826-b513-02c8e88ca89e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:57:28.698193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5659fb6c--b6d6--5368--9f3c--0e525a1333df-osd--block--5659fb6c--b6d6--5368--9f3c--0e525a1333df'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-niF0yj-rLoA-623U-KMw1-I2na-LHzi-DZgykD', 'scsi-0QEMU_QEMU_HARDDISK_e6547550-6f0e-4316-b715-af657c75c64a', 'scsi-SQEMU_QEMU_HARDDISK_e6547550-6f0e-4316-b715-af657c75c64a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 00:57:28.698200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b2cce40-d718-4f99-a243-3b703c717e59', 'scsi-SQEMU_QEMU_HARDDISK_6b2cce40-d718-4f99-a243-3b703c717e59'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-04 00:57:28.698211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-04 00:57:28.698218 | orchestrator | skipping: [testbed-node-5]
2026-02-04 00:57:28.698225 | orchestrator |
2026-02-04 00:57:28.698231 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-04 00:57:28.698243 | orchestrator | Wednesday 04 February 2026 00:55:33 +0000 (0:00:00.582) 0:00:16.955 ****
2026-02-04 00:57:28.698258 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--29c6bc8c--f904--55ca--809f--6429b65a49e4-osd--block--29c6bc8c--f904--55ca--809f--6429b65a49e4', 'dm-uuid-LVM-futtSpiu2Dc6zeEwlRIqGKxk2240GEq2NDItB2Yekp0j5JSGwBE6yhTovNBjHOIV'], 'labels': [], 'masters': [], 'uuids': []},
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698274 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1b7fb365--e96c--53e1--a018--1a0a8a845031-osd--block--1b7fb365--e96c--53e1--a018--1a0a8a845031', 'dm-uuid-LVM-auzvDlBNDf4L39V45seqETFBTe0hlfpeBqlD6kkCsuNwd2IcE42BuaoOpC4zPjAE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698291 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698302 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698312 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698334 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698346 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698356 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698365 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698383 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--6fbd78c3--b583--5fde--80ba--0c2cdf325dc7-osd--block--6fbd78c3--b583--5fde--80ba--0c2cdf325dc7', 'dm-uuid-LVM-NTd3wVqFaLZs0HHLMiyjtJ62L05RnYUwQ92nicsvk9XmhXeB6EY8l1ES0A9vlzPg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698393 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698409 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c6467dc2--49cb--511a--ae45--cb6bd8ce65cd-osd--block--c6467dc2--49cb--511a--ae45--cb6bd8ce65cd', 'dm-uuid-LVM-7ccqDb2IMlGvbROgddrBNTB0o1Up1e8jKjBNkdmhEIPN7p0IyTvtaslg9ZIPMZAL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-02-04 00:57:28.698425 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016', 'scsi-SQEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part1', 'scsi-SQEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part14', 'scsi-SQEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part15', 'scsi-SQEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part16', 'scsi-SQEMU_QEMU_HARDDISK_0f0ca4d6-f7ab-4710-bff1-30f9d3f6a016-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698441 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698457 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--29c6bc8c--f904--55ca--809f--6429b65a49e4-osd--block--29c6bc8c--f904--55ca--809f--6429b65a49e4'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5yak5L-o4at-xQ2L-P6UC-hZvx-2Sm1-YoLKVV', 'scsi-0QEMU_QEMU_HARDDISK_1679d905-c182-4dcb-a16f-ff388fb87fa8', 'scsi-SQEMU_QEMU_HARDDISK_1679d905-c182-4dcb-a16f-ff388fb87fa8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698471 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698482 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1b7fb365--e96c--53e1--a018--1a0a8a845031-osd--block--1b7fb365--e96c--53e1--a018--1a0a8a845031'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-J1Zsmh-e107-W6nI-zKJc-WW2R-CulX-Lhjb6v', 'scsi-0QEMU_QEMU_HARDDISK_6b00b999-8e8e-4579-a93c-a7b8030012f4', 'scsi-SQEMU_QEMU_HARDDISK_6b00b999-8e8e-4579-a93c-a7b8030012f4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698501 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--81b3d681--fa24--5b92--b5b8--11e84f5b22d9-osd--block--81b3d681--fa24--5b92--b5b8--11e84f5b22d9', 'dm-uuid-LVM-6y4w4ArVi5D1tyooWsj9aIJCekc2S7nLhYC0RCkddpwlSQuyk6aIosyXEoEqPeQY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698511 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f279b9c8-b4a1-41c6-b00f-bd5a2c0b4c81', 'scsi-SQEMU_QEMU_HARDDISK_f279b9c8-b4a1-41c6-b00f-bd5a2c0b4c81'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698523 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698543 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5659fb6c--b6d6--5368--9f3c--0e525a1333df-osd--block--5659fb6c--b6d6--5368--9f3c--0e525a1333df', 'dm-uuid-LVM-3afIFmYJiFa9RqNchm5P6Eeh4oUATUr9E8CsbzdSyxtBuOKMUchzqSt7IIBMTCOu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 
00:57:28.698555 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698566 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698639 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698651 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.698662 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698674 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698687 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698698 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698705 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698716 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 
00:57:28.698722 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698729 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698744 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004', 'scsi-SQEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part1', 'scsi-SQEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part14', 'scsi-SQEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part15', 'scsi-SQEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part16', 'scsi-SQEMU_QEMU_HARDDISK_2836c5f1-c587-4b9a-8d47-e8c2679ad004-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-04 00:57:28.698756 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698763 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6fbd78c3--b583--5fde--80ba--0c2cdf325dc7-osd--block--6fbd78c3--b583--5fde--80ba--0c2cdf325dc7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FO9Ylb-oyya-bgzx-QKlN-HkEC-gQ2h-HRhlzY', 'scsi-0QEMU_QEMU_HARDDISK_b014772c-38b5-4caa-9603-223bc8ef3a74', 'scsi-SQEMU_QEMU_HARDDISK_b014772c-38b5-4caa-9603-223bc8ef3a74'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698770 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698783 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c6467dc2--49cb--511a--ae45--cb6bd8ce65cd-osd--block--c6467dc2--49cb--511a--ae45--cb6bd8ce65cd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Bq1yaU-lh82-MUro-hneI-alZs-sfZu-Db2wDT', 'scsi-0QEMU_QEMU_HARDDISK_70272979-0540-4b40-8ef0-41f73c6a4a5a', 'scsi-SQEMU_QEMU_HARDDISK_70272979-0540-4b40-8ef0-41f73c6a4a5a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698790 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698801 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b592fbb-955b-4fdf-b12f-717d86698fde', 'scsi-SQEMU_QEMU_HARDDISK_5b592fbb-955b-4fdf-b12f-717d86698fde'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698812 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3', 'scsi-SQEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part1', 'scsi-SQEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part14', 'scsi-SQEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part15', 
'scsi-SQEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part16', 'scsi-SQEMU_QEMU_HARDDISK_ac95cfef-965b-47de-974f-7b957b3140f3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698821 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-27-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698833 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': 
['ceph--81b3d681--fa24--5b92--b5b8--11e84f5b22d9-osd--block--81b3d681--fa24--5b92--b5b8--11e84f5b22d9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xqFisJ-bzmJ-mbhN-Vi30-8HQb-PT81-9BRMzc', 'scsi-0QEMU_QEMU_HARDDISK_330cb526-2149-4826-b513-02c8e88ca89e', 'scsi-SQEMU_QEMU_HARDDISK_330cb526-2149-4826-b513-02c8e88ca89e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698839 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:57:28.698846 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5659fb6c--b6d6--5368--9f3c--0e525a1333df-osd--block--5659fb6c--b6d6--5368--9f3c--0e525a1333df'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-niF0yj-rLoA-623U-KMw1-I2na-LHzi-DZgykD', 'scsi-0QEMU_QEMU_HARDDISK_e6547550-6f0e-4316-b715-af657c75c64a', 'scsi-SQEMU_QEMU_HARDDISK_e6547550-6f0e-4316-b715-af657c75c64a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698853 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6b2cce40-d718-4f99-a243-3b703c717e59', 'scsi-SQEMU_QEMU_HARDDISK_6b2cce40-d718-4f99-a243-3b703c717e59'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698865 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-00-03-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 00:57:28.698872 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:57:28.698879 | orchestrator | 2026-02-04 00:57:28.698885 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-04 00:57:28.698892 | orchestrator | Wednesday 04 February 2026 00:55:33 +0000 (0:00:00.654) 0:00:17.610 **** 2026-02-04 00:57:28.698898 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:57:28.698932 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:57:28.698943 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:57:28.698954 | orchestrator | 2026-02-04 00:57:28.698965 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-04 00:57:28.698975 | orchestrator | Wednesday 04 February 2026 00:55:34 +0000 (0:00:00.811) 0:00:18.421 **** 2026-02-04 00:57:28.698983 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:57:28.698990 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:57:28.698996 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:57:28.699002 | orchestrator | 2026-02-04 00:57:28.699009 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-04 00:57:28.699015 | orchestrator | Wednesday 04 February 2026 00:55:35 +0000 (0:00:00.490) 0:00:18.912 **** 2026-02-04 00:57:28.699022 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:57:28.699032 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:57:28.699048 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:57:28.699059 | orchestrator | 2026-02-04 00:57:28.699070 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-04 00:57:28.699080 | orchestrator | Wednesday 04 February 2026 00:55:35 +0000 (0:00:00.683) 
0:00:19.595 **** 2026-02-04 00:57:28.699089 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.699099 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:57:28.699108 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:57:28.699118 | orchestrator | 2026-02-04 00:57:28.699127 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-04 00:57:28.699138 | orchestrator | Wednesday 04 February 2026 00:55:36 +0000 (0:00:00.285) 0:00:19.880 **** 2026-02-04 00:57:28.699148 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.699157 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:57:28.699167 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:57:28.699176 | orchestrator | 2026-02-04 00:57:28.699186 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-04 00:57:28.699196 | orchestrator | Wednesday 04 February 2026 00:55:36 +0000 (0:00:00.430) 0:00:20.311 **** 2026-02-04 00:57:28.699205 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.699215 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:57:28.699225 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:57:28.699235 | orchestrator | 2026-02-04 00:57:28.699246 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-04 00:57:28.699256 | orchestrator | Wednesday 04 February 2026 00:55:36 +0000 (0:00:00.488) 0:00:20.800 **** 2026-02-04 00:57:28.699267 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-04 00:57:28.699279 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-04 00:57:28.699289 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-04 00:57:28.699300 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-04 00:57:28.699310 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-04 00:57:28.699322 | 
orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-04 00:57:28.699333 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-04 00:57:28.699389 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-04 00:57:28.699399 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-04 00:57:28.699405 | orchestrator | 2026-02-04 00:57:28.699412 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-04 00:57:28.699418 | orchestrator | Wednesday 04 February 2026 00:55:37 +0000 (0:00:01.001) 0:00:21.802 **** 2026-02-04 00:57:28.699424 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-04 00:57:28.699431 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-04 00:57:28.699437 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-04 00:57:28.699443 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.699449 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-04 00:57:28.699462 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-04 00:57:28.699469 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-04 00:57:28.699475 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:57:28.699481 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-04 00:57:28.699487 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-04 00:57:28.699494 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-04 00:57:28.699505 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:57:28.699515 | orchestrator | 2026-02-04 00:57:28.699525 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-04 00:57:28.699536 | orchestrator | Wednesday 04 February 2026 00:55:38 +0000 (0:00:00.398) 0:00:22.200 **** 2026-02-04 
00:57:28.699546 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 00:57:28.699556 | orchestrator | 2026-02-04 00:57:28.699566 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-04 00:57:28.699578 | orchestrator | Wednesday 04 February 2026 00:55:39 +0000 (0:00:00.668) 0:00:22.869 **** 2026-02-04 00:57:28.699597 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.699608 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:57:28.699619 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:57:28.699629 | orchestrator | 2026-02-04 00:57:28.699635 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-04 00:57:28.699642 | orchestrator | Wednesday 04 February 2026 00:55:39 +0000 (0:00:00.318) 0:00:23.188 **** 2026-02-04 00:57:28.699648 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.699654 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:57:28.699660 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:57:28.699667 | orchestrator | 2026-02-04 00:57:28.699677 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-04 00:57:28.699684 | orchestrator | Wednesday 04 February 2026 00:55:39 +0000 (0:00:00.313) 0:00:23.501 **** 2026-02-04 00:57:28.699690 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.699696 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:57:28.699703 | orchestrator | skipping: [testbed-node-5] 2026-02-04 00:57:28.699709 | orchestrator | 2026-02-04 00:57:28.699715 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-04 00:57:28.699721 | orchestrator | Wednesday 04 February 2026 00:55:39 +0000 (0:00:00.293) 0:00:23.794 **** 2026-02-04 
00:57:28.699728 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:57:28.699734 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:57:28.699740 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:57:28.699747 | orchestrator | 2026-02-04 00:57:28.699753 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-04 00:57:28.699759 | orchestrator | Wednesday 04 February 2026 00:55:40 +0000 (0:00:00.904) 0:00:24.699 **** 2026-02-04 00:57:28.699766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 00:57:28.699772 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 00:57:28.699778 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 00:57:28.699785 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.699791 | orchestrator | 2026-02-04 00:57:28.699797 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-04 00:57:28.699804 | orchestrator | Wednesday 04 February 2026 00:55:41 +0000 (0:00:00.357) 0:00:25.056 **** 2026-02-04 00:57:28.699810 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 00:57:28.699816 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 00:57:28.699823 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 00:57:28.699829 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.699835 | orchestrator | 2026-02-04 00:57:28.699842 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-04 00:57:28.699855 | orchestrator | Wednesday 04 February 2026 00:55:41 +0000 (0:00:00.419) 0:00:25.476 **** 2026-02-04 00:57:28.699862 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 00:57:28.699868 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 00:57:28.699874 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 00:57:28.699881 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.699887 | orchestrator | 2026-02-04 00:57:28.699893 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-04 00:57:28.699937 | orchestrator | Wednesday 04 February 2026 00:55:42 +0000 (0:00:00.373) 0:00:25.850 **** 2026-02-04 00:57:28.699947 | orchestrator | ok: [testbed-node-3] 2026-02-04 00:57:28.699953 | orchestrator | ok: [testbed-node-4] 2026-02-04 00:57:28.699959 | orchestrator | ok: [testbed-node-5] 2026-02-04 00:57:28.699966 | orchestrator | 2026-02-04 00:57:28.699972 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-04 00:57:28.699979 | orchestrator | Wednesday 04 February 2026 00:55:42 +0000 (0:00:00.414) 0:00:26.265 **** 2026-02-04 00:57:28.699985 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-04 00:57:28.699991 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-04 00:57:28.699997 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-04 00:57:28.700004 | orchestrator | 2026-02-04 00:57:28.700010 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-04 00:57:28.700016 | orchestrator | Wednesday 04 February 2026 00:55:42 +0000 (0:00:00.499) 0:00:26.765 **** 2026-02-04 00:57:28.700023 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-04 00:57:28.700029 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 00:57:28.700035 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 00:57:28.700041 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-04 00:57:28.700050 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-04 00:57:28.700060 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-04 00:57:28.700071 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-04 00:57:28.700080 | orchestrator | 2026-02-04 00:57:28.700090 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-04 00:57:28.700099 | orchestrator | Wednesday 04 February 2026 00:55:43 +0000 (0:00:00.995) 0:00:27.761 **** 2026-02-04 00:57:28.700109 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-04 00:57:28.700120 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 00:57:28.700132 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 00:57:28.700143 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-04 00:57:28.700153 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-04 00:57:28.700165 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-04 00:57:28.700177 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-04 00:57:28.700184 | orchestrator | 2026-02-04 00:57:28.700190 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-02-04 00:57:28.700196 | orchestrator | Wednesday 04 February 2026 00:55:45 +0000 (0:00:01.924) 0:00:29.685 **** 2026-02-04 00:57:28.700203 | orchestrator | skipping: [testbed-node-3] 2026-02-04 00:57:28.700209 | orchestrator | skipping: [testbed-node-4] 2026-02-04 00:57:28.700215 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-02-04 00:57:28.700226 | orchestrator | 2026-02-04 00:57:28.700232 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-02-04 00:57:28.700244 | orchestrator | Wednesday 04 February 2026 00:55:46 +0000 (0:00:00.368) 0:00:30.053 **** 2026-02-04 00:57:28.700251 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-04 00:57:28.700259 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-04 00:57:28.700265 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-04 00:57:28.700272 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-04 00:57:28.700279 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-04 00:57:28.700285 | orchestrator | 2026-02-04 00:57:28.700291 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-02-04 00:57:28.700298 | orchestrator | Wednesday 04 February 2026 00:56:32 +0000 (0:00:45.909) 0:01:15.963 **** 2026-02-04 00:57:28.700304 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:57:28.700310 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:57:28.700317 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:57:28.700323 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:57:28.700329 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:57:28.700337 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:57:28.700348 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-02-04 00:57:28.700359 | orchestrator | 2026-02-04 00:57:28.700370 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-02-04 00:57:28.700380 | orchestrator | Wednesday 04 February 2026 00:56:57 +0000 (0:00:25.400) 0:01:41.363 **** 2026-02-04 00:57:28.700389 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:57:28.700399 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:57:28.700409 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:57:28.700418 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:57:28.700429 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:57:28.700441 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:57:28.700451 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-04 00:57:28.700462 | orchestrator | 2026-02-04 00:57:28.700470 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-02-04 00:57:28.700476 | orchestrator | Wednesday 04 February 2026 00:57:09 +0000 (0:00:12.095) 0:01:53.459 **** 2026-02-04 00:57:28.700488 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:57:28.700494 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-04 00:57:28.700501 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 00:57:28.700507 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:57:28.700514 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-04 00:57:28.700526 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 00:57:28.700537 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:57:28.700547 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-04 00:57:28.700557 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 00:57:28.700609 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:57:28.700621 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-04 00:57:28.700631 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 00:57:28.700637 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:57:28.700644 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-02-04 00:57:28.700650 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 00:57:28.700657 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 00:57:28.700663 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-04 00:57:28.700669 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 00:57:28.700676 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-02-04 00:57:28.700682 | orchestrator | 2026-02-04 00:57:28.700689 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 00:57:28.700695 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-04 00:57:28.700702 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-04 00:57:28.700709 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-04 00:57:28.700715 | orchestrator | 2026-02-04 00:57:28.700722 | orchestrator | 2026-02-04 00:57:28.700728 | orchestrator | 2026-02-04 00:57:28.700734 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:57:28.700740 | orchestrator | Wednesday 04 February 2026 00:57:26 +0000 (0:00:17.364) 0:02:10.823 **** 2026-02-04 00:57:28.700747 | orchestrator | =============================================================================== 2026-02-04 00:57:28.700753 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.91s 2026-02-04 00:57:28.700759 | orchestrator | generate keys ---------------------------------------------------------- 25.40s 2026-02-04 00:57:28.700766 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.36s 
2026-02-04 00:57:28.700772 | orchestrator | get keys from monitors ------------------------------------------------- 12.10s 2026-02-04 00:57:28.700778 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.15s 2026-02-04 00:57:28.700784 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.92s 2026-02-04 00:57:28.700791 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.75s 2026-02-04 00:57:28.700797 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.00s 2026-02-04 00:57:28.700809 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.00s 2026-02-04 00:57:28.700815 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.90s 2026-02-04 00:57:28.700821 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.84s 2026-02-04 00:57:28.700828 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.83s 2026-02-04 00:57:28.700834 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.81s 2026-02-04 00:57:28.700840 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.69s 2026-02-04 00:57:28.700847 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.68s 2026-02-04 00:57:28.700853 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.67s 2026-02-04 00:57:28.700859 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.65s 2026-02-04 00:57:28.700866 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.63s 2026-02-04 00:57:28.700872 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.62s 2026-02-04 
00:57:28.700915 | orchestrator | ceph-facts : Set_fact build devices from resolved symlinks -------------- 0.61s 2026-02-04 00:57:28.700925 | orchestrator | 2026-02-04 00:57:28 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED 2026-02-04 00:57:28.700932 | orchestrator | 2026-02-04 00:57:28 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:57:31.744644 | orchestrator | 2026-02-04 00:57:31 | INFO  | Task c37dd944-b27d-4131-bf9c-f2b8bdd373f6 is in state STARTED 2026-02-04 00:57:31.746085 | orchestrator | 2026-02-04 00:57:31 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED 2026-02-04 00:57:31.746156 | orchestrator | 2026-02-04 00:57:31 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:57:34.792185 | orchestrator | 2026-02-04 00:57:34 | INFO  | Task c37dd944-b27d-4131-bf9c-f2b8bdd373f6 is in state STARTED 2026-02-04 00:57:34.793830 | orchestrator | 2026-02-04 00:57:34 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED 2026-02-04 00:57:34.793949 | orchestrator | 2026-02-04 00:57:34 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:57:37.833003 | orchestrator | 2026-02-04 00:57:37 | INFO  | Task c37dd944-b27d-4131-bf9c-f2b8bdd373f6 is in state STARTED 2026-02-04 00:57:37.833981 | orchestrator | 2026-02-04 00:57:37 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED 2026-02-04 00:57:37.834175 | orchestrator | 2026-02-04 00:57:37 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:57:40.876112 | orchestrator | 2026-02-04 00:57:40 | INFO  | Task c37dd944-b27d-4131-bf9c-f2b8bdd373f6 is in state STARTED 2026-02-04 00:57:40.878310 | orchestrator | 2026-02-04 00:57:40 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED 2026-02-04 00:57:40.878591 | orchestrator | 2026-02-04 00:57:40 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:57:43.913460 | orchestrator | 2026-02-04 00:57:43 | INFO  | Task 
c37dd944-b27d-4131-bf9c-f2b8bdd373f6 is in state STARTED 2026-02-04 00:57:43.916063 | orchestrator | 2026-02-04 00:57:43 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED 2026-02-04 00:57:43.916123 | orchestrator | 2026-02-04 00:57:43 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:57:46.962384 | orchestrator | 2026-02-04 00:57:46 | INFO  | Task c37dd944-b27d-4131-bf9c-f2b8bdd373f6 is in state STARTED 2026-02-04 00:57:46.964597 | orchestrator | 2026-02-04 00:57:46 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED 2026-02-04 00:57:46.964803 | orchestrator | 2026-02-04 00:57:46 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:57:50.024468 | orchestrator | 2026-02-04 00:57:50 | INFO  | Task c37dd944-b27d-4131-bf9c-f2b8bdd373f6 is in state STARTED 2026-02-04 00:57:50.025304 | orchestrator | 2026-02-04 00:57:50 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED 2026-02-04 00:57:50.025449 | orchestrator | 2026-02-04 00:57:50 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:57:53.068574 | orchestrator | 2026-02-04 00:57:53 | INFO  | Task c37dd944-b27d-4131-bf9c-f2b8bdd373f6 is in state STARTED 2026-02-04 00:57:53.070120 | orchestrator | 2026-02-04 00:57:53 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED 2026-02-04 00:57:53.070169 | orchestrator | 2026-02-04 00:57:53 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:57:56.101540 | orchestrator | 2026-02-04 00:57:56 | INFO  | Task c37dd944-b27d-4131-bf9c-f2b8bdd373f6 is in state STARTED 2026-02-04 00:57:56.104178 | orchestrator | 2026-02-04 00:57:56 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED 2026-02-04 00:57:56.104832 | orchestrator | 2026-02-04 00:57:56 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:57:59.157779 | orchestrator | 2026-02-04 00:57:59 | INFO  | Task c37dd944-b27d-4131-bf9c-f2b8bdd373f6 is in state STARTED 2026-02-04 
00:57:59.160201 | orchestrator | 2026-02-04 00:57:59 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED 2026-02-04 00:57:59.160278 | orchestrator | 2026-02-04 00:57:59 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:58:02.204330 | orchestrator | 2026-02-04 00:58:02 | INFO  | Task c37dd944-b27d-4131-bf9c-f2b8bdd373f6 is in state STARTED 2026-02-04 00:58:02.206685 | orchestrator | 2026-02-04 00:58:02 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED 2026-02-04 00:58:02.206751 | orchestrator | 2026-02-04 00:58:02 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:58:05.258614 | orchestrator | 2026-02-04 00:58:05 | INFO  | Task c37dd944-b27d-4131-bf9c-f2b8bdd373f6 is in state SUCCESS 2026-02-04 00:58:05.260160 | orchestrator | 2026-02-04 00:58:05 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED 2026-02-04 00:58:05.261629 | orchestrator | 2026-02-04 00:58:05 | INFO  | Task 2cf6eb5c-c530-40b2-b87b-9526b4933dac is in state STARTED 2026-02-04 00:58:05.261673 | orchestrator | 2026-02-04 00:58:05 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:58:08.314743 | orchestrator | 2026-02-04 00:58:08 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED 2026-02-04 00:58:08.315622 | orchestrator | 2026-02-04 00:58:08 | INFO  | Task 2cf6eb5c-c530-40b2-b87b-9526b4933dac is in state STARTED 2026-02-04 00:58:08.315676 | orchestrator | 2026-02-04 00:58:08 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:58:11.361531 | orchestrator | 2026-02-04 00:58:11 | INFO  | Task 473253a7-65b9-4b71-922f-b5e27d14078e is in state STARTED 2026-02-04 00:58:11.364184 | orchestrator | 2026-02-04 00:58:11 | INFO  | Task 2cf6eb5c-c530-40b2-b87b-9526b4933dac is in state STARTED 2026-02-04 00:58:11.364275 | orchestrator | 2026-02-04 00:58:11 | INFO  | Wait 1 second(s) until the next check 2026-02-04 00:58:14.407493 | orchestrator | 2026-02-04 00:58:14 | INFO  | Task 
473253a7-65b9-4b71-922f-b5e27d14078e is in state SUCCESS 2026-02-04 00:58:14.407589 | orchestrator | 2026-02-04 00:58:14.407600 | orchestrator | 2026-02-04 00:58:14.407608 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-02-04 00:58:14.407615 | orchestrator | 2026-02-04 00:58:14.407622 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-02-04 00:58:14.407630 | orchestrator | Wednesday 04 February 2026 00:57:31 +0000 (0:00:00.147) 0:00:00.147 **** 2026-02-04 00:58:14.407660 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-04 00:58:14.407669 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-04 00:58:14.407736 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-04 00:58:14.407745 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-04 00:58:14.407752 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-04 00:58:14.407759 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-04 00:58:14.407766 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-04 00:58:14.407773 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-02-04 00:58:14.407780 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-04 00:58:14.407786 | orchestrator | 2026-02-04 00:58:14.407793 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-02-04 00:58:14.407800 | orchestrator | 
Wednesday 04 February 2026 00:57:36 +0000 (0:00:04.687) 0:00:04.835 **** 2026-02-04 00:58:14.407806 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-04 00:58:14.407813 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-04 00:58:14.407896 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-04 00:58:14.407952 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-04 00:58:14.407959 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-04 00:58:14.407966 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-04 00:58:14.407972 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-04 00:58:14.407978 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-02-04 00:58:14.407984 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-04 00:58:14.407991 | orchestrator | 2026-02-04 00:58:14.407997 | orchestrator | TASK [Create share directory] ************************************************** 2026-02-04 00:58:14.408004 | orchestrator | Wednesday 04 February 2026 00:57:40 +0000 (0:00:04.295) 0:00:09.130 **** 2026-02-04 00:58:14.408011 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-04 00:58:14.408018 | orchestrator | 2026-02-04 00:58:14.408024 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-02-04 00:58:14.408031 | orchestrator | Wednesday 04 February 2026 00:57:41 +0000 (0:00:00.970) 0:00:10.101 **** 2026-02-04 00:58:14.408037 | orchestrator 
| changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-02-04 00:58:14.408044 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-04 00:58:14.408051 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-04 00:58:14.408058 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-02-04 00:58:14.408064 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-04 00:58:14.408071 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-02-04 00:58:14.408078 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-02-04 00:58:14.408093 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-02-04 00:58:14.408100 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-02-04 00:58:14.408106 | orchestrator | 2026-02-04 00:58:14.408112 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-02-04 00:58:14.408118 | orchestrator | Wednesday 04 February 2026 00:57:54 +0000 (0:00:12.629) 0:00:22.731 **** 2026-02-04 00:58:14.408124 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-02-04 00:58:14.408131 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-02-04 00:58:14.408138 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-04 00:58:14.408172 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-04 00:58:14.408179 | orchestrator | ok: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-04 00:58:14.408186 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-04 00:58:14.408192 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-02-04 00:58:14.408198 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-02-04 00:58:14.408204 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-02-04 00:58:14.408210 | orchestrator | 2026-02-04 00:58:14.408216 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-02-04 00:58:14.408222 | orchestrator | Wednesday 04 February 2026 00:57:56 +0000 (0:00:02.926) 0:00:25.657 **** 2026-02-04 00:58:14.408229 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-02-04 00:58:14.408235 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-04 00:58:14.408241 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-04 00:58:14.408248 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-02-04 00:58:14.408254 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-04 00:58:14.408260 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-02-04 00:58:14.408266 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-02-04 00:58:14.408273 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-02-04 00:58:14.408279 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-02-04 00:58:14.408285 | orchestrator | 2026-02-04 00:58:14.408291 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-04 00:58:14.408297 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 00:58:14.408305 | orchestrator | 2026-02-04 00:58:14.408311 | orchestrator | 2026-02-04 00:58:14.408317 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 00:58:14.408323 | orchestrator | Wednesday 04 February 2026 00:58:03 +0000 (0:00:06.657) 0:00:32.315 **** 2026-02-04 00:58:14.408329 | orchestrator | =============================================================================== 2026-02-04 00:58:14.408705 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.63s 2026-02-04 00:58:14.408714 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.66s 2026-02-04 00:58:14.408721 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.69s 2026-02-04 00:58:14.408728 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.30s 2026-02-04 00:58:14.408735 | orchestrator | Check if target directories exist --------------------------------------- 2.93s 2026-02-04 00:58:14.408752 | orchestrator | Create share directory -------------------------------------------------- 0.97s 2026-02-04 00:58:14.408758 | orchestrator | 2026-02-04 00:58:14.408772 | orchestrator | 2026-02-04 00:58:14.408779 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 00:58:14.408785 | orchestrator | 2026-02-04 00:58:14.408791 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 00:58:14.408798 | orchestrator | Wednesday 04 February 2026 00:55:31 +0000 (0:00:00.261) 0:00:00.261 **** 2026-02-04 00:58:14.408804 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:58:14.408812 | 
orchestrator | ok: [testbed-node-1] 2026-02-04 00:58:14.408819 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:58:14.408825 | orchestrator | 2026-02-04 00:58:14.408850 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 00:58:14.408856 | orchestrator | Wednesday 04 February 2026 00:55:32 +0000 (0:00:00.320) 0:00:00.582 **** 2026-02-04 00:58:14.408862 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-02-04 00:58:14.408869 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-02-04 00:58:14.408875 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-02-04 00:58:14.408881 | orchestrator | 2026-02-04 00:58:14.408888 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-02-04 00:58:14.408894 | orchestrator | 2026-02-04 00:58:14.408901 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-04 00:58:14.408907 | orchestrator | Wednesday 04 February 2026 00:55:32 +0000 (0:00:00.499) 0:00:01.081 **** 2026-02-04 00:58:14.408914 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:58:14.408921 | orchestrator | 2026-02-04 00:58:14.408927 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-02-04 00:58:14.408933 | orchestrator | Wednesday 04 February 2026 00:55:33 +0000 (0:00:00.631) 0:00:01.713 **** 2026-02-04 00:58:14.408952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 00:58:14.408963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 00:58:14.408987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 00:58:14.408996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 00:58:14.409005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 00:58:14.409015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 00:58:14.409023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 00:58:14.409031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 00:58:14.409042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 00:58:14.409049 | orchestrator | 2026-02-04 00:58:14.409056 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-02-04 00:58:14.409068 | orchestrator | Wednesday 04 February 2026 00:55:35 +0000 (0:00:02.268) 0:00:03.981 **** 2026-02-04 00:58:14.409075 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:58:14.409082 | orchestrator | 2026-02-04 00:58:14.409088 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-02-04 00:58:14.409095 | orchestrator | Wednesday 04 February 2026 00:55:35 +0000 (0:00:00.119) 0:00:04.100 **** 2026-02-04 00:58:14.409102 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:58:14.409109 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:58:14.409116 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:58:14.409123 | orchestrator | 2026-02-04 00:58:14.409130 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-02-04 00:58:14.409136 | orchestrator | Wednesday 04 February 2026 00:55:36 +0000 (0:00:00.428) 0:00:04.529 **** 2026-02-04 00:58:14.409143 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 
00:58:14.409150 | orchestrator | 2026-02-04 00:58:14.409158 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-04 00:58:14.409164 | orchestrator | Wednesday 04 February 2026 00:55:37 +0000 (0:00:00.808) 0:00:05.338 **** 2026-02-04 00:58:14.409171 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 00:58:14.409445 | orchestrator | 2026-02-04 00:58:14.409458 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-02-04 00:58:14.409465 | orchestrator | Wednesday 04 February 2026 00:55:37 +0000 (0:00:00.718) 0:00:06.056 **** 2026-02-04 00:58:14.409480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 00:58:14.409488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 00:58:14.409511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 00:58:14.409519 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 00:58:14.409527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 00:58:14.409537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 00:58:14.409543 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 00:58:14.409614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 00:58:14.409622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 00:58:14.409628 | orchestrator | 2026-02-04 00:58:14.409634 | orchestrator | TASK 
[service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-02-04 00:58:14.409640 | orchestrator | Wednesday 04 February 2026 00:55:41 +0000 (0:00:03.596) 0:00:09.652 **** 2026-02-04 00:58:14.409654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 00:58:14.409660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:58:14.409676 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 00:58:14.409687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:58:14.409693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 00:58:14.409704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 00:58:14.409710 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:58:14.409717 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:58:14.409724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 00:58:14.409731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:58:14.409742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 00:58:14.409748 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:58:14.409754 | orchestrator | 2026-02-04 00:58:14.409760 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-02-04 00:58:14.409766 | orchestrator | Wednesday 04 February 2026 00:55:41 +0000 (0:00:00.646) 0:00:10.298 **** 2026-02-04 00:58:14.409773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 00:58:14.409784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:58:14.409791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 00:58:14.409798 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:58:14.409916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 00:58:14.409940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:58:14.409947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 00:58:14.409953 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:58:14.409968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 00:58:14.409975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 
'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:58:14.409981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 00:58:14.409992 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:58:14.409998 | orchestrator | 2026-02-04 00:58:14.410008 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-02-04 00:58:14.410069 | orchestrator | Wednesday 04 February 2026 00:55:43 +0000 (0:00:01.031) 0:00:11.330 **** 2026-02-04 00:58:14.410077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 00:58:14.410084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 00:58:14.410097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 00:58:14.410103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 00:58:14.410118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 00:58:14.410125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 00:58:14.410131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 00:58:14.410138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 00:58:14.410149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 00:58:14.410156 | orchestrator | 2026-02-04 00:58:14.410162 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-02-04 00:58:14.410169 | orchestrator | Wednesday 04 February 2026 00:55:46 +0000 (0:00:03.579) 0:00:14.909 **** 2026-02-04 00:58:14.410179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 00:58:14.410190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:58:14.410197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 00:58:14.410203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:58:14.410214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 00:58:14.410227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:58:14.410236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 00:58:14.410243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 00:58:14.410250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 00:58:14.410256 | orchestrator | 2026-02-04 00:58:14.410263 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-02-04 00:58:14.410270 | orchestrator | Wednesday 04 February 2026 00:55:51 +0000 (0:00:05.108) 0:00:20.018 **** 2026-02-04 00:58:14.410277 | orchestrator | changed: [testbed-node-0] 2026-02-04 00:58:14.410284 | orchestrator | changed: [testbed-node-1] 2026-02-04 00:58:14.410290 | orchestrator | changed: [testbed-node-2] 2026-02-04 00:58:14.410296 | orchestrator | 2026-02-04 00:58:14.410302 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-02-04 00:58:14.410308 | orchestrator | Wednesday 04 February 2026 00:55:53 +0000 (0:00:01.493) 0:00:21.512 **** 2026-02-04 00:58:14.410315 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:58:14.410321 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:58:14.410328 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:58:14.410335 | orchestrator | 2026-02-04 00:58:14.410341 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-02-04 00:58:14.410351 | orchestrator | Wednesday 04 February 2026 00:55:53 +0000 (0:00:00.551) 0:00:22.063 **** 2026-02-04 00:58:14.410358 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:58:14.410369 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:58:14.410375 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:58:14.410381 | orchestrator | 2026-02-04 00:58:14.410387 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-02-04 00:58:14.410394 | orchestrator | Wednesday 04 February 2026 00:55:54 +0000 
(0:00:00.295) 0:00:22.359 **** 2026-02-04 00:58:14.410401 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:58:14.410407 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:58:14.410413 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:58:14.410420 | orchestrator | 2026-02-04 00:58:14.410426 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-02-04 00:58:14.410433 | orchestrator | Wednesday 04 February 2026 00:55:54 +0000 (0:00:00.514) 0:00:22.874 **** 2026-02-04 00:58:14.410440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 00:58:14.410453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:58:14.410461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 00:58:14.410468 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:58:14.410475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 00:58:14.410493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:58:14.410500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 00:58:14.410507 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:58:14.410520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 00:58:14.410529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 00:58:14.410537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 00:58:14.410544 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:58:14.410550 | orchestrator | 
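Editor's note: every container definition logged above carries the same `healthcheck` dict (`interval`/`retries`/`start_period`/`timeout` as bare-second strings, plus a `['CMD-SHELL', …]` test). As a minimal illustrative sketch only (this is not kolla-ansible or OSISM code; the function name and the `s`-suffix convention are assumptions), the mapping of that dict onto Docker healthcheck CLI flags could look like:

```python
# Hypothetical helper: translate a kolla-style healthcheck dict, as seen in
# the log items above, into `docker run` health options. Values in the log
# are unit-less seconds; Docker expects duration strings, so 's' is appended.

def healthcheck_flags(hc):
    cmd = hc["test"]
    # ['CMD-SHELL', '<shell command>'] means the test runs via the shell.
    test = cmd[1] if cmd[0] == "CMD-SHELL" else " ".join(cmd)
    return [
        f"--health-cmd={test}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# Example using the keystone-ssh healthcheck from the log:
hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen sshd 8023"], "timeout": "30",
}
print(healthcheck_flags(hc))
```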
2026-02-04 00:58:14.410560 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-04 00:58:14.410566 | orchestrator | Wednesday 04 February 2026 00:55:55 +0000 (0:00:00.574) 0:00:23.448 **** 2026-02-04 00:58:14.410573 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:58:14.410580 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:58:14.410586 | orchestrator | skipping: [testbed-node-2] 2026-02-04 00:58:14.410593 | orchestrator | 2026-02-04 00:58:14.410599 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-02-04 00:58:14.410604 | orchestrator | Wednesday 04 February 2026 00:55:55 +0000 (0:00:00.285) 0:00:23.734 **** 2026-02-04 00:58:14.410610 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-04 00:58:14.410619 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-04 00:58:14.410625 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-04 00:58:14.410633 | orchestrator | 2026-02-04 00:58:14.410640 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-02-04 00:58:14.410648 | orchestrator | Wednesday 04 February 2026 00:55:57 +0000 (0:00:01.794) 0:00:25.528 **** 2026-02-04 00:58:14.410655 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 00:58:14.410663 | orchestrator | 2026-02-04 00:58:14.410670 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-02-04 00:58:14.410677 | orchestrator | Wednesday 04 February 2026 00:55:58 +0000 (0:00:00.920) 0:00:26.449 **** 2026-02-04 00:58:14.410685 | orchestrator | skipping: [testbed-node-0] 2026-02-04 00:58:14.410691 | orchestrator | skipping: [testbed-node-1] 2026-02-04 00:58:14.410699 | orchestrator | skipping: 
[testbed-node-2] 2026-02-04 00:58:14.410706 | orchestrator | 2026-02-04 00:58:14.410714 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-02-04 00:58:14.410720 | orchestrator | Wednesday 04 February 2026 00:55:58 +0000 (0:00:00.791) 0:00:27.241 **** 2026-02-04 00:58:14.410727 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-04 00:58:14.410735 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 00:58:14.410742 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-04 00:58:14.410749 | orchestrator | 2026-02-04 00:58:14.410756 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-02-04 00:58:14.410763 | orchestrator | Wednesday 04 February 2026 00:55:59 +0000 (0:00:01.019) 0:00:28.261 **** 2026-02-04 00:58:14.410770 | orchestrator | ok: [testbed-node-0] 2026-02-04 00:58:14.410777 | orchestrator | ok: [testbed-node-1] 2026-02-04 00:58:14.410784 | orchestrator | ok: [testbed-node-2] 2026-02-04 00:58:14.410790 | orchestrator | 2026-02-04 00:58:14.410798 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-02-04 00:58:14.410805 | orchestrator | Wednesday 04 February 2026 00:56:00 +0000 (0:00:00.279) 0:00:28.541 **** 2026-02-04 00:58:14.410812 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-04 00:58:14.410818 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-04 00:58:14.410825 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-04 00:58:14.410934 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-04 00:58:14.410942 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-04 00:58:14.410948 | orchestrator | 
changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-04 00:58:14.410954 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-04 00:58:14.410962 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-04 00:58:14.410969 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-04 00:58:14.410982 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-04 00:58:14.410988 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-04 00:58:14.410996 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-04 00:58:14.411002 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-04 00:58:14.411009 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-04 00:58:14.411016 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-04 00:58:14.411022 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-04 00:58:14.411029 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-04 00:58:14.411036 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-04 00:58:14.411043 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-04 00:58:14.411049 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-04 
00:58:14.411056 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-04 00:58:14.411063 | orchestrator | 2026-02-04 00:58:14.411070 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-02-04 00:58:14.411076 | orchestrator | Wednesday 04 February 2026 00:56:09 +0000 (0:00:09.026) 0:00:37.567 **** 2026-02-04 00:58:14.411083 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-04 00:58:14.411090 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-04 00:58:14.411096 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-04 00:58:14.411103 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-04 00:58:14.411110 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-04 00:58:14.411123 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-04 00:58:14.411130 | orchestrator | 2026-02-04 00:58:14.411137 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-02-04 00:58:14.411144 | orchestrator | Wednesday 04 February 2026 00:56:12 +0000 (0:00:02.885) 0:00:40.452 **** 2026-02-04 00:58:14.411151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 00:58:14.411164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 00:58:14.411178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 00:58:14.411186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 00:58:14.411198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 
00:58:14.411205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 00:58:14.411217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 00:58:14.411229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 00:58:14.411235 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-04 00:58:14.411242 | orchestrator |
2026-02-04 00:58:14.411249 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-04 00:58:14.411256 | orchestrator | Wednesday 04 February 2026 00:56:14 +0000 (0:00:02.511) 0:00:42.964 ****
2026-02-04 00:58:14.411263 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:58:14.411270 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:58:14.411277 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:58:14.411283 | orchestrator |
2026-02-04 00:58:14.411290 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-02-04 00:58:14.411297 | orchestrator | Wednesday 04 February 2026 00:56:14 +0000 (0:00:00.231) 0:00:43.195 ****
2026-02-04 00:58:14.411303 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:58:14.411311 | orchestrator |
2026-02-04 00:58:14.411318 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-02-04 00:58:14.411324 | orchestrator | Wednesday 04 February 2026 00:56:17 +0000 (0:00:02.236) 0:00:45.431 ****
2026-02-04 00:58:14.411332 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:58:14.411338 | orchestrator |
2026-02-04 00:58:14.411345 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-02-04 00:58:14.411352 | orchestrator | Wednesday 04 February 2026 00:56:19 +0000 (0:00:02.338) 0:00:47.770 ****
2026-02-04 00:58:14.411359 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:58:14.411366 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:58:14.411373 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:58:14.411379 | orchestrator |
2026-02-04 00:58:14.411386 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-02-04 00:58:14.411396 | orchestrator | Wednesday 04 February 2026 00:56:20 +0000 (0:00:00.937) 0:00:48.707 ****
2026-02-04 00:58:14.411403 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:58:14.411409 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:58:14.411416 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:58:14.411422 | orchestrator |
2026-02-04 00:58:14.411429 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-02-04 00:58:14.411436 | orchestrator | Wednesday 04 February 2026 00:56:20 +0000 (0:00:00.268) 0:00:48.976 ****
2026-02-04 00:58:14.411442 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:58:14.411450 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:58:14.411461 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:58:14.411468 | orchestrator |
2026-02-04 00:58:14.411475 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-02-04 00:58:14.411481 | orchestrator | Wednesday 04 February 2026 00:56:20 +0000 (0:00:00.293) 0:00:49.269 ****
2026-02-04 00:58:14.411488 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:58:14.411494 | orchestrator |
2026-02-04 00:58:14.411501 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-02-04 00:58:14.411508 | orchestrator | Wednesday 04 February 2026 00:56:36 +0000 (0:00:15.522) 0:01:04.791 ****
2026-02-04 00:58:14.411514 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:58:14.411520 | orchestrator |
2026-02-04 00:58:14.411527 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-04 00:58:14.411534 | orchestrator | Wednesday 04 February 2026 00:56:47 +0000 (0:00:11.257) 0:01:16.049 ****
2026-02-04 00:58:14.411541 | orchestrator |
2026-02-04 00:58:14.411547 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-04 00:58:14.411552 | orchestrator | Wednesday 04 February 2026 00:56:47 +0000 (0:00:00.067) 0:01:16.117 ****
2026-02-04 00:58:14.411558 | orchestrator |
2026-02-04 00:58:14.411564 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-04 00:58:14.411570 | orchestrator | Wednesday 04 February 2026 00:56:47 +0000 (0:00:00.075) 0:01:16.193 ****
2026-02-04 00:58:14.411577 | orchestrator |
2026-02-04 00:58:14.411584 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-02-04 00:58:14.411591 | orchestrator | Wednesday 04 February 2026 00:56:47 +0000 (0:00:00.064) 0:01:16.257 ****
2026-02-04 00:58:14.411598 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:58:14.411605 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:58:14.411612 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:58:14.411619 | orchestrator |
2026-02-04 00:58:14.411626 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-02-04 00:58:14.411636 | orchestrator | Wednesday 04 February 2026 00:57:01 +0000 (0:00:13.890) 0:01:30.147 ****
2026-02-04 00:58:14.411644 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:58:14.411651 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:58:14.411658 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:58:14.411665 | orchestrator |
2026-02-04 00:58:14.411673 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-02-04 00:58:14.411680 | orchestrator | Wednesday 04 February 2026 00:57:09 +0000 (0:00:07.630) 0:01:37.777 ****
2026-02-04 00:58:14.411686 | orchestrator | changed: [testbed-node-1]
2026-02-04 00:58:14.411693 | orchestrator | changed: [testbed-node-2]
2026-02-04 00:58:14.411700 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:58:14.411707 | orchestrator |
2026-02-04 00:58:14.411715 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-04 00:58:14.411721 | orchestrator | Wednesday 04 February 2026 00:57:17 +0000 (0:00:07.793) 0:01:45.571 ****
2026-02-04 00:58:14.411729 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 00:58:14.411735 | orchestrator |
2026-02-04 00:58:14.411741 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-02-04 00:58:14.411746 | orchestrator | Wednesday 04 February 2026 00:57:17 +0000 (0:00:00.671) 0:01:46.243 ****
2026-02-04 00:58:14.411753 | orchestrator | ok: [testbed-node-1]
2026-02-04 00:58:14.411759 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:58:14.411766 | orchestrator | ok: [testbed-node-2]
2026-02-04 00:58:14.411772 | orchestrator |
2026-02-04 00:58:14.411779 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-02-04 00:58:14.411785 | orchestrator | Wednesday 04 February 2026 00:57:18 +0000 (0:00:00.750) 0:01:46.993 ****
2026-02-04 00:58:14.411792 | orchestrator | changed: [testbed-node-0]
2026-02-04 00:58:14.411798 | orchestrator |
2026-02-04 00:58:14.411805 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-02-04 00:58:14.411816 | orchestrator | Wednesday 04 February 2026 00:57:20 +0000 (0:00:01.617) 0:01:48.610 ****
2026-02-04 00:58:14.411823 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-02-04 00:58:14.411851 | orchestrator |
2026-02-04 00:58:14.411859 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-02-04 00:58:14.411866 | orchestrator | Wednesday 04 February 2026 00:57:33 +0000 (0:00:13.071) 0:02:01.682 ****
2026-02-04 00:58:14.411872 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-02-04 00:58:14.411880 | orchestrator |
2026-02-04 00:58:14.411887 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-02-04 00:58:14.411893 | orchestrator | Wednesday 04 February 2026 00:58:01 +0000 (0:00:28.608) 0:02:30.290 ****
2026-02-04 00:58:14.411900 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-02-04 00:58:14.411907 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-02-04 00:58:14.411914 | orchestrator |
2026-02-04 00:58:14.411921 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-02-04 00:58:14.411928 | orchestrator | Wednesday 04 February 2026 00:58:09 +0000 (0:00:07.266) 0:02:37.557 ****
2026-02-04 00:58:14.411934 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:58:14.411941 | orchestrator |
2026-02-04 00:58:14.411947 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-02-04 00:58:14.411955 | orchestrator | Wednesday 04 February 2026 00:58:09 +0000 (0:00:00.136) 0:02:37.693 ****
2026-02-04 00:58:14.411962 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:58:14.411969 | orchestrator |
2026-02-04 00:58:14.411980 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-02-04 00:58:14.411988 | orchestrator | Wednesday 04 February 2026 00:58:09 +0000 (0:00:00.119) 0:02:37.813 ****
2026-02-04 00:58:14.411994 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:58:14.412000 | orchestrator |
2026-02-04 00:58:14.412006 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-02-04 00:58:14.412012 | orchestrator | Wednesday 04 February 2026 00:58:09 +0000 (0:00:00.120) 0:02:37.934 ****
2026-02-04 00:58:14.412018 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:58:14.412025 | orchestrator |
2026-02-04 00:58:14.412031 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-02-04 00:58:14.412038 | orchestrator | Wednesday 04 February 2026 00:58:10 +0000 (0:00:00.480) 0:02:38.414 ****
2026-02-04 00:58:14.412045 | orchestrator | ok: [testbed-node-0]
2026-02-04 00:58:14.412051 | orchestrator |
2026-02-04 00:58:14.412058 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-04 00:58:14.412065 | orchestrator | Wednesday 04 February 2026 00:58:13 +0000 (0:00:03.462) 0:02:41.877 ****
2026-02-04 00:58:14.412071 | orchestrator | skipping: [testbed-node-0]
2026-02-04 00:58:14.412078 | orchestrator | skipping: [testbed-node-1]
2026-02-04 00:58:14.412084 | orchestrator | skipping: [testbed-node-2]
2026-02-04 00:58:14.412091 | orchestrator |
2026-02-04 00:58:14.412097 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 00:58:14.412105 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-04 00:58:14.412113 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-04 00:58:14.412120 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-04 00:58:14.412128 | orchestrator |
2026-02-04 00:58:14.412134 | orchestrator |
2026-02-04 00:58:14.412141 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 00:58:14.412148 | orchestrator | Wednesday 04 February 2026 00:58:13 +0000 (0:00:00.410) 0:02:42.288 ****
2026-02-04 00:58:14.412160 | orchestrator | ===============================================================================
2026-02-04 00:58:14.412170 | orchestrator | service-ks-register : keystone | Creating services --------------------- 28.61s
2026-02-04 00:58:14.412177 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.52s
2026-02-04 00:58:14.412183 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 13.89s
2026-02-04 00:58:14.412189 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.07s
2026-02-04 00:58:14.412196 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.26s
2026-02-04 00:58:14.412202 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.03s
2026-02-04 00:58:14.412209 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.79s
2026-02-04 00:58:14.412216 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.63s
2026-02-04 00:58:14.412222 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.27s
2026-02-04 00:58:14.412229 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.11s
2026-02-04 00:58:14.412235 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.60s
2026-02-04 00:58:14.412242 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.58s
2026-02-04 00:58:14.412248 | orchestrator | keystone : Creating default user role ----------------------------------- 3.46s
2026-02-04 00:58:14.412255 | orchestrator | keystone : Copying files for 
keystone-ssh ------------------------------- 2.89s
2026-02-04 00:58:14.412261 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.51s
2026-02-04 00:58:14.412268 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.34s
2026-02-04 00:58:14.412274 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.27s
2026-02-04 00:58:14.412280 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.24s
2026-02-04 00:58:14.412285 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.79s
2026-02-04 00:58:14.412290 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.62s
2026-02-04 00:58:14.412296 | orchestrator | 2026-02-04 00:58:14 | INFO  | Task 2cf6eb5c-c530-40b2-b87b-9526b4933dac is in state STARTED
2026-02-04 00:58:14.412302 | orchestrator | 2026-02-04 00:58:14 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:58:17.479343 | orchestrator | 2026-02-04 00:58:17 | INFO  | Task d76730fc-b5ed-4c78-9ac6-bc322b253258 is in state STARTED
2026-02-04 00:58:17.481720 | orchestrator | 2026-02-04 00:58:17 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED
2026-02-04 00:58:17.482419 | orchestrator | 2026-02-04 00:58:17 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 00:58:17.483207 | orchestrator | 2026-02-04 00:58:17 | INFO  | Task 2cf6eb5c-c530-40b2-b87b-9526b4933dac is in state STARTED
2026-02-04 00:58:17.486956 | orchestrator | 2026-02-04 00:58:17 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 00:58:17.487012 | orchestrator | 2026-02-04 00:58:17 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:58:57.104855 | orchestrator | 2026-02-04 00:58:57 | INFO  | Task d76730fc-b5ed-4c78-9ac6-bc322b253258 is in state STARTED
2026-02-04 00:58:57.106545 | orchestrator | 2026-02-04 00:58:57 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED
2026-02-04 00:58:57.107624 | orchestrator | 2026-02-04 00:58:57 | INFO  | Task 6a09cec0-01ce-4961-8fe6-23a946af439a is in state STARTED
2026-02-04 00:58:57.108975 | orchestrator | 2026-02-04 00:58:57 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 00:58:57.111544 | orchestrator | 2026-02-04 00:58:57 | INFO  | Task 2cf6eb5c-c530-40b2-b87b-9526b4933dac is in state SUCCESS
2026-02-04 00:58:57.113284 | orchestrator | 2026-02-04 00:58:57 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 00:58:57.113347 | orchestrator | 2026-02-04 00:58:57 | INFO  | Wait 1 second(s) until the next check
2026-02-04 00:59:57.891851 | orchestrator | 2026-02-04 00:59:57 | INFO  | Task d76730fc-b5ed-4c78-9ac6-bc322b253258 is in state STARTED
2026-02-04 00:59:57.892244 | orchestrator | 2026-02-04 00:59:57 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED
2026-02-04 00:59:57.893225 | orchestrator | 2026-02-04 00:59:57 | INFO  | Task 6a09cec0-01ce-4961-8fe6-23a946af439a is in state STARTED
2026-02-04 00:59:57.894054 | orchestrator | 2026-02-04 00:59:57 | INFO  | Task 
2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED 2026-02-04 00:59:57.894991 | orchestrator | 2026-02-04 00:59:57 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED 2026-02-04 00:59:57.895026 | orchestrator | 2026-02-04 00:59:57 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:00:00.927944 | orchestrator | 2026-02-04 01:00:00 | INFO  | Task d76730fc-b5ed-4c78-9ac6-bc322b253258 is in state STARTED 2026-02-04 01:00:00.928216 | orchestrator | 2026-02-04 01:00:00 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED 2026-02-04 01:00:00.928995 | orchestrator | 2026-02-04 01:00:00 | INFO  | Task 6a09cec0-01ce-4961-8fe6-23a946af439a is in state STARTED 2026-02-04 01:00:00.929456 | orchestrator | 2026-02-04 01:00:00 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED 2026-02-04 01:00:00.930193 | orchestrator | 2026-02-04 01:00:00 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED 2026-02-04 01:00:00.930222 | orchestrator | 2026-02-04 01:00:00 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:00:03.957095 | orchestrator | 2026-02-04 01:00:03 | INFO  | Task d76730fc-b5ed-4c78-9ac6-bc322b253258 is in state STARTED 2026-02-04 01:00:03.957384 | orchestrator | 2026-02-04 01:00:03 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED 2026-02-04 01:00:03.958108 | orchestrator | 2026-02-04 01:00:03 | INFO  | Task 6a09cec0-01ce-4961-8fe6-23a946af439a is in state STARTED 2026-02-04 01:00:03.958653 | orchestrator | 2026-02-04 01:00:03 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED 2026-02-04 01:00:03.959852 | orchestrator | 2026-02-04 01:00:03 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED 2026-02-04 01:00:03.959882 | orchestrator | 2026-02-04 01:00:03 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:00:06.985850 | orchestrator | 2026-02-04 01:00:06 | INFO  | Task 
d76730fc-b5ed-4c78-9ac6-bc322b253258 is in state STARTED 2026-02-04 01:00:06.986186 | orchestrator | 2026-02-04 01:00:06 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED 2026-02-04 01:00:06.987595 | orchestrator | 2026-02-04 01:00:06 | INFO  | Task 6a09cec0-01ce-4961-8fe6-23a946af439a is in state STARTED 2026-02-04 01:00:06.988312 | orchestrator | 2026-02-04 01:00:06 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED 2026-02-04 01:00:06.989165 | orchestrator | 2026-02-04 01:00:06 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED 2026-02-04 01:00:06.989285 | orchestrator | 2026-02-04 01:00:06 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:00:10.026354 | orchestrator | 2026-02-04 01:00:10 | INFO  | Task d76730fc-b5ed-4c78-9ac6-bc322b253258 is in state STARTED 2026-02-04 01:00:10.026762 | orchestrator | 2026-02-04 01:00:10 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED 2026-02-04 01:00:10.027638 | orchestrator | 2026-02-04 01:00:10 | INFO  | Task 6a09cec0-01ce-4961-8fe6-23a946af439a is in state STARTED 2026-02-04 01:00:10.027925 | orchestrator | 2026-02-04 01:00:10 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED 2026-02-04 01:00:10.028774 | orchestrator | 2026-02-04 01:00:10 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED 2026-02-04 01:00:10.028804 | orchestrator | 2026-02-04 01:00:10 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:00:13.052870 | orchestrator | 2026-02-04 01:00:13 | INFO  | Task d76730fc-b5ed-4c78-9ac6-bc322b253258 is in state STARTED 2026-02-04 01:00:13.053196 | orchestrator | 2026-02-04 01:00:13 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED 2026-02-04 01:00:13.053942 | orchestrator | 2026-02-04 01:00:13 | INFO  | Task 6a09cec0-01ce-4961-8fe6-23a946af439a is in state STARTED 2026-02-04 01:00:13.054537 | orchestrator | 2026-02-04 01:00:13 | INFO  | Task 
2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED 2026-02-04 01:00:13.055206 | orchestrator | 2026-02-04 01:00:13 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED 2026-02-04 01:00:13.055277 | orchestrator | 2026-02-04 01:00:13 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:00:16.106295 | orchestrator | 2026-02-04 01:00:16 | INFO  | Task d76730fc-b5ed-4c78-9ac6-bc322b253258 is in state STARTED 2026-02-04 01:00:16.106813 | orchestrator | 2026-02-04 01:00:16 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED 2026-02-04 01:00:16.107282 | orchestrator | 2026-02-04 01:00:16 | INFO  | Task 6a09cec0-01ce-4961-8fe6-23a946af439a is in state STARTED 2026-02-04 01:00:16.107889 | orchestrator | 2026-02-04 01:00:16 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED 2026-02-04 01:00:16.108497 | orchestrator | 2026-02-04 01:00:16 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED 2026-02-04 01:00:16.108597 | orchestrator | 2026-02-04 01:00:16 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:00:19.131983 | orchestrator | 2026-02-04 01:00:19 | INFO  | Task d76730fc-b5ed-4c78-9ac6-bc322b253258 is in state STARTED 2026-02-04 01:00:19.134422 | orchestrator | 2026-02-04 01:00:19 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED 2026-02-04 01:00:19.136326 | orchestrator | 2026-02-04 01:00:19 | INFO  | Task 6a09cec0-01ce-4961-8fe6-23a946af439a is in state STARTED 2026-02-04 01:00:19.138151 | orchestrator | 2026-02-04 01:00:19 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED 2026-02-04 01:00:19.139997 | orchestrator | 2026-02-04 01:00:19 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED 2026-02-04 01:00:19.140057 | orchestrator | 2026-02-04 01:00:19 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:00:22.166515 | orchestrator | 2026-02-04 01:00:22 | INFO  | Task 
d76730fc-b5ed-4c78-9ac6-bc322b253258 is in state STARTED
2026-02-04 01:00:22.166778 | orchestrator | 2026-02-04 01:00:22 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED
2026-02-04 01:00:22.167489 | orchestrator | 2026-02-04 01:00:22 | INFO  | Task 6a09cec0-01ce-4961-8fe6-23a946af439a is in state STARTED
2026-02-04 01:00:22.168105 | orchestrator | 2026-02-04 01:00:22 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:00:22.168706 | orchestrator | 2026-02-04 01:00:22 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:00:22.168726 | orchestrator | 2026-02-04 01:00:22 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:00:25.191894 | orchestrator | 2026-02-04 01:00:25 | INFO  | Task d76730fc-b5ed-4c78-9ac6-bc322b253258 is in state SUCCESS
2026-02-04 01:00:25.192984 | orchestrator |
2026-02-04 01:00:25.193031 | orchestrator |
2026-02-04 01:00:25.193040 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-02-04 01:00:25.193073 | orchestrator |
2026-02-04 01:00:25.193082 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-02-04 01:00:25.193089 | orchestrator | Wednesday 04 February 2026 00:58:08 +0000 (0:00:00.225) 0:00:00.225 ****
2026-02-04 01:00:25.193096 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-02-04 01:00:25.193104 | orchestrator |
2026-02-04 01:00:25.193111 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-02-04 01:00:25.193118 | orchestrator | Wednesday 04 February 2026 00:58:08 +0000 (0:00:00.220) 0:00:00.445 ****
2026-02-04 01:00:25.193125 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-02-04 01:00:25.193132 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-02-04 01:00:25.193140 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-02-04 01:00:25.193147 | orchestrator |
2026-02-04 01:00:25.193153 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-02-04 01:00:25.193160 | orchestrator | Wednesday 04 February 2026 00:58:09 +0000 (0:00:01.200) 0:00:01.646 ****
2026-02-04 01:00:25.193167 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-02-04 01:00:25.193174 | orchestrator |
2026-02-04 01:00:25.193198 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-02-04 01:00:25.193205 | orchestrator | Wednesday 04 February 2026 00:58:10 +0000 (0:00:01.361) 0:00:03.007 ****
2026-02-04 01:00:25.193212 | orchestrator | changed: [testbed-manager]
2026-02-04 01:00:25.193218 | orchestrator |
2026-02-04 01:00:25.193225 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-02-04 01:00:25.193232 | orchestrator | Wednesday 04 February 2026 00:58:11 +0000 (0:00:00.914) 0:00:03.921 ****
2026-02-04 01:00:25.193238 | orchestrator | changed: [testbed-manager]
2026-02-04 01:00:25.193244 | orchestrator |
2026-02-04 01:00:25.193251 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-02-04 01:00:25.193266 | orchestrator | Wednesday 04 February 2026 00:58:12 +0000 (0:00:00.829) 0:00:04.750 ****
2026-02-04 01:00:25.193273 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-02-04 01:00:25.193280 | orchestrator | ok: [testbed-manager]
2026-02-04 01:00:25.193286 | orchestrator |
2026-02-04 01:00:25.193293 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-02-04 01:00:25.193299 | orchestrator | Wednesday 04 February 2026 00:58:44 +0000 (0:00:32.164) 0:00:36.915 ****
2026-02-04 01:00:25.193305 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-02-04 01:00:25.193312 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-02-04 01:00:25.193318 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-02-04 01:00:25.193394 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-02-04 01:00:25.193400 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-02-04 01:00:25.193407 | orchestrator |
2026-02-04 01:00:25.193413 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-02-04 01:00:25.193419 | orchestrator | Wednesday 04 February 2026 00:58:48 +0000 (0:00:03.914) 0:00:40.829 ****
2026-02-04 01:00:25.193425 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-02-04 01:00:25.193432 | orchestrator |
2026-02-04 01:00:25.193438 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-02-04 01:00:25.193445 | orchestrator | Wednesday 04 February 2026 00:58:49 +0000 (0:00:00.450) 0:00:41.280 ****
2026-02-04 01:00:25.193451 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:00:25.193457 | orchestrator |
2026-02-04 01:00:25.193463 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-02-04 01:00:25.193469 | orchestrator | Wednesday 04 February 2026 00:58:49 +0000 (0:00:00.132) 0:00:41.412 ****
2026-02-04 01:00:25.193475 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:00:25.193481 | orchestrator |
2026-02-04 01:00:25.193487 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-02-04 01:00:25.193493 | orchestrator | Wednesday 04 February 2026 00:58:49 +0000 (0:00:00.452) 0:00:41.864 ****
2026-02-04 01:00:25.193500 | orchestrator | changed: [testbed-manager]
2026-02-04 01:00:25.193506 | orchestrator |
2026-02-04 01:00:25.193512 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-02-04 01:00:25.193518 | orchestrator | Wednesday 04 February 2026 00:58:51 +0000 (0:00:01.441) 0:00:43.306 ****
2026-02-04 01:00:25.193536 | orchestrator | changed: [testbed-manager]
2026-02-04 01:00:25.193569 | orchestrator |
2026-02-04 01:00:25.193576 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-02-04 01:00:25.193582 | orchestrator | Wednesday 04 February 2026 00:58:51 +0000 (0:00:00.714) 0:00:44.020 ****
2026-02-04 01:00:25.193589 | orchestrator | changed: [testbed-manager]
2026-02-04 01:00:25.193596 | orchestrator |
2026-02-04 01:00:25.193602 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-02-04 01:00:25.193609 | orchestrator | Wednesday 04 February 2026 00:58:52 +0000 (0:00:00.547) 0:00:44.567 ****
2026-02-04 01:00:25.193615 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-02-04 01:00:25.193622 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-02-04 01:00:25.193641 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-02-04 01:00:25.193654 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-02-04 01:00:25.193661 | orchestrator |
2026-02-04 01:00:25.193667 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 01:00:25.193674 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 01:00:25.193681 | orchestrator |
2026-02-04 01:00:25.193688 | orchestrator |
2026-02-04 01:00:25.193706 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 01:00:25.193715 | orchestrator | Wednesday 04 February 2026 00:58:53 +0000 (0:00:01.409) 0:00:45.977 ****
2026-02-04 01:00:25.193722 | orchestrator | ===============================================================================
2026-02-04 01:00:25.193728 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 32.16s
2026-02-04 01:00:25.193735 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.91s
2026-02-04 01:00:25.193741 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.44s
2026-02-04 01:00:25.193748 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.41s
2026-02-04 01:00:25.193754 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.36s
2026-02-04 01:00:25.193760 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.20s
2026-02-04 01:00:25.193766 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.91s
2026-02-04 01:00:25.193772 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.83s
2026-02-04 01:00:25.193779 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.71s
2026-02-04 01:00:25.193785 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.55s
2026-02-04 01:00:25.193791 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.45s
2026-02-04 01:00:25.193797 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.45s
2026-02-04 01:00:25.193804 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s
2026-02-04 01:00:25.193810 |
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s
2026-02-04 01:00:25.193817 | orchestrator |
2026-02-04 01:00:25.193823 | orchestrator |
2026-02-04 01:00:25.193830 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 01:00:25.193836 | orchestrator |
2026-02-04 01:00:25.193843 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 01:00:25.193854 | orchestrator | Wednesday 04 February 2026 00:58:21 +0000 (0:00:00.198) 0:00:00.198 ****
2026-02-04 01:00:25.193861 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:00:25.193868 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:00:25.193875 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:00:25.193882 | orchestrator |
2026-02-04 01:00:25.193889 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 01:00:25.193896 | orchestrator | Wednesday 04 February 2026 00:58:21 +0000 (0:00:00.268) 0:00:00.466 ****
2026-02-04 01:00:25.193903 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-02-04 01:00:25.193910 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-02-04 01:00:25.193917 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-02-04 01:00:25.193923 | orchestrator |
2026-02-04 01:00:25.193931 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-02-04 01:00:25.193937 | orchestrator |
2026-02-04 01:00:25.193944 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-02-04 01:00:25.193950 | orchestrator | Wednesday 04 February 2026 00:58:21 +0000 (0:00:00.407) 0:00:00.873 ****
2026-02-04 01:00:25.193957 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:00:25.193964 | orchestrator |
2026-02-04 01:00:25.193970 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-02-04 01:00:25.193981 | orchestrator | Wednesday 04 February 2026 00:58:22 +0000 (0:00:00.441) 0:00:01.314 ****
2026-02-04 01:00:25.193987 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-02-04 01:00:25.193993 | orchestrator |
2026-02-04 01:00:25.194000 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-02-04 01:00:25.194006 | orchestrator | Wednesday 04 February 2026 00:58:26 +0000 (0:00:04.210) 0:00:05.525 ****
2026-02-04 01:00:25.194043 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-02-04 01:00:25.194053 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-02-04 01:00:25.194060 | orchestrator |
2026-02-04 01:00:25.194067 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-02-04 01:00:25.194074 | orchestrator | Wednesday 04 February 2026 00:58:33 +0000 (0:00:07.290) 0:00:12.816 ****
2026-02-04 01:00:25.194081 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-04 01:00:25.194087 | orchestrator |
2026-02-04 01:00:25.194094 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-02-04 01:00:25.194100 | orchestrator | Wednesday 04 February 2026 00:58:37 +0000 (0:00:03.632) 0:00:16.448 ****
2026-02-04 01:00:25.194106 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-04 01:00:25.194113 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-02-04 01:00:25.194119 | orchestrator |
2026-02-04 01:00:25.194126 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-02-04 01:00:25.194132 | orchestrator | Wednesday 04 February 2026 00:58:41 +0000 (0:00:04.010) 0:00:20.458 ****
2026-02-04 01:00:25.194138 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-04 01:00:25.194144 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-02-04 01:00:25.194151 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-02-04 01:00:25.194157 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-02-04 01:00:25.194164 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-02-04 01:00:25.194170 | orchestrator |
2026-02-04 01:00:25.194177 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-02-04 01:00:25.194183 | orchestrator | Wednesday 04 February 2026 00:58:59 +0000 (0:00:18.043) 0:00:38.502 ****
2026-02-04 01:00:25.194197 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-02-04 01:00:25.194204 | orchestrator |
2026-02-04 01:00:25.194210 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-02-04 01:00:25.194217 | orchestrator | Wednesday 04 February 2026 00:59:03 +0000 (0:00:04.271) 0:00:42.774 ****
2026-02-04 01:00:25.194226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-04 01:00:25.194241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-04 01:00:25.194255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-04 01:00:25.194264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-04 01:00:25.194284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-04 01:00:25.194291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-04 01:00:25.194301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-04 01:00:25.194313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-04 01:00:25.194321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-04 01:00:25.194327 | orchestrator |
2026-02-04 01:00:25.194334 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-02-04 01:00:25.194341 | orchestrator | Wednesday 04 February 2026 00:59:06 +0000 (0:00:02.631) 0:00:45.405 ****
2026-02-04 01:00:25.194347 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-02-04 01:00:25.194355 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-02-04 01:00:25.194361 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-02-04 01:00:25.194368 | orchestrator |
2026-02-04 01:00:25.194375 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-02-04 01:00:25.194382 | orchestrator | Wednesday 04 February 2026 00:59:07 +0000 (0:00:01.594) 0:00:47.000 ****
2026-02-04 01:00:25.194389 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:00:25.194396 | orchestrator |
2026-02-04 01:00:25.194403 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-02-04 01:00:25.194410 | orchestrator | Wednesday 04 February 2026 00:59:08 +0000 (0:00:00.121) 0:00:47.121 ****
2026-02-04 01:00:25.194416 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:00:25.194423 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:00:25.194429 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:00:25.194436 | orchestrator |
2026-02-04 01:00:25.194442 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-02-04 01:00:25.194449 | orchestrator | Wednesday 04 February 2026 00:59:08 +0000 (0:00:00.428) 0:00:47.549 ****
2026-02-04 01:00:25.194455 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:00:25.194462 | orchestrator |
2026-02-04 01:00:25.194468 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-02-04 01:00:25.194475 | orchestrator | Wednesday 04 February 2026 00:59:08 +0000 (0:00:00.504) 0:00:48.054 ****
2026-02-04 01:00:25.194486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-04 01:00:25.194502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-04 01:00:25.194568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-04 01:00:25.194577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-04 
01:00:25.194593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:00:25.194600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:00:25.194612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}}) 2026-02-04 01:00:25.194622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:00:25.194683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:00:25.194691 | orchestrator | 2026-02-04 01:00:25.194697 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-02-04 01:00:25.194705 | orchestrator | Wednesday 04 February 2026 00:59:12 +0000 (0:00:03.623) 0:00:51.677 **** 2026-02-04 01:00:25.194712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 01:00:25.194725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 01:00:25.194743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:00:25.194749 | orchestrator | skipping: [testbed-node-0] 2026-02-04 
01:00:25.194759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 01:00:25.194766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 01:00:25.194773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:00:25.194779 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:00:25.194786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 01:00:25.194800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 01:00:25.194807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:00:25.194814 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:00:25.194820 | orchestrator | 2026-02-04 01:00:25.194826 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-02-04 01:00:25.194833 | orchestrator | Wednesday 04 February 2026 00:59:13 +0000 (0:00:00.951) 0:00:52.629 **** 2026-02-04 01:00:25.194842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 01:00:25.194849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 01:00:25.194856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:00:25.194864 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:00:25.194876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 01:00:25.194887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 01:00:25.194898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:00:25.194930 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:00:25.194938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 
'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 01:00:25.194945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 01:00:25.194952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:00:25.194963 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:00:25.194970 | orchestrator | 2026-02-04 01:00:25.194979 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-02-04 01:00:25.194986 | orchestrator | Wednesday 04 February 2026 00:59:14 +0000 (0:00:01.167) 0:00:53.796 **** 2026-02-04 01:00:25.194993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:00:25.195002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:00:25.195009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:00:25.195016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:00:25.195033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:00:25.195041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:00:25.195049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:00:25.195058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:00:25.195065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:00:25.195072 | orchestrator | 2026-02-04 01:00:25.195078 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-02-04 01:00:25.195084 | orchestrator | Wednesday 04 February 2026 00:59:18 +0000 (0:00:03.753) 0:00:57.549 **** 2026-02-04 01:00:25.195090 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:00:25.195096 | orchestrator | changed: [testbed-node-2] 2026-02-04 
01:00:25.195103 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:00:25.195109 | orchestrator | 2026-02-04 01:00:25.195121 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-02-04 01:00:25.195128 | orchestrator | Wednesday 04 February 2026 00:59:20 +0000 (0:00:02.371) 0:00:59.921 **** 2026-02-04 01:00:25.195134 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 01:00:25.195140 | orchestrator | 2026-02-04 01:00:25.195146 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-02-04 01:00:25.195152 | orchestrator | Wednesday 04 February 2026 00:59:21 +0000 (0:00:01.124) 0:01:01.046 **** 2026-02-04 01:00:25.195159 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:00:25.195165 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:00:25.195171 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:00:25.195178 | orchestrator | 2026-02-04 01:00:25.195185 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-02-04 01:00:25.195191 | orchestrator | Wednesday 04 February 2026 00:59:22 +0000 (0:00:00.499) 0:01:01.546 **** 2026-02-04 01:00:25.195202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 
'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:00:25.195210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:00:25.195220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:00:25.195229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:00:25.195240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:00:25.195251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:00:25.195259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:00:25.195265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:00:25.195274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:00:25.195281 | orchestrator | 2026-02-04 01:00:25.195287 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-04 01:00:25.195293 | orchestrator | Wednesday 04 February 2026 00:59:31 +0000 (0:00:09.039) 0:01:10.585 **** 2026-02-04 01:00:25.195300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 01:00:25.195311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 01:00:25.195322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:00:25.195328 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:00:25.195336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 01:00:25.195348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 
'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 01:00:25.195356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:00:25.195367 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:00:25.195375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 01:00:25.195386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 01:00:25.195393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:00:25.195399 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:00:25.195405 | orchestrator | 2026-02-04 01:00:25.195411 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-02-04 01:00:25.195418 | orchestrator | Wednesday 04 February 2026 00:59:32 +0000 (0:00:01.324) 0:01:11.910 **** 2026-02-04 01:00:25.195427 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:00:25.195438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:00:25.195445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 01:00:25.195456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:00:25.195463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:00:25.195472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 01:00:25.195483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:00:25.195490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:00:25.195497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:00:25.195504 | orchestrator | 2026-02-04 01:00:25.195512 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-04 01:00:25.195518 | orchestrator | Wednesday 04 February 2026 00:59:36 +0000 (0:00:04.177) 0:01:16.088 **** 2026-02-04 01:00:25.195525 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:00:25.195532 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:00:25.195538 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:00:25.195545 | orchestrator | 2026-02-04 01:00:25.195552 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-02-04 01:00:25.195558 | orchestrator | Wednesday 04 February 2026 00:59:37 +0000 (0:00:00.284) 0:01:16.372 **** 2026-02-04 01:00:25.195565 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:00:25.195572 | orchestrator | 2026-02-04 01:00:25.195579 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-02-04 01:00:25.195589 | orchestrator | Wednesday 04 
February 2026 00:59:39 +0000 (0:00:02.266) 0:01:18.639 **** 2026-02-04 01:00:25.195596 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:00:25.195614 | orchestrator | 2026-02-04 01:00:25.195620 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-02-04 01:00:25.195645 | orchestrator | Wednesday 04 February 2026 00:59:41 +0000 (0:00:02.355) 0:01:20.996 **** 2026-02-04 01:00:25.195652 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:00:25.195659 | orchestrator | 2026-02-04 01:00:25.195666 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-04 01:00:25.195672 | orchestrator | Wednesday 04 February 2026 00:59:54 +0000 (0:00:12.266) 0:01:33.262 **** 2026-02-04 01:00:25.195678 | orchestrator | 2026-02-04 01:00:25.195684 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-04 01:00:25.195691 | orchestrator | Wednesday 04 February 2026 00:59:54 +0000 (0:00:00.057) 0:01:33.320 **** 2026-02-04 01:00:25.195697 | orchestrator | 2026-02-04 01:00:25.195703 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-04 01:00:25.195710 | orchestrator | Wednesday 04 February 2026 00:59:54 +0000 (0:00:00.062) 0:01:33.383 **** 2026-02-04 01:00:25.195723 | orchestrator | 2026-02-04 01:00:25.195730 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-02-04 01:00:25.195737 | orchestrator | Wednesday 04 February 2026 00:59:54 +0000 (0:00:00.065) 0:01:33.449 **** 2026-02-04 01:00:25.195744 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:00:25.195751 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:00:25.195758 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:00:25.195765 | orchestrator | 2026-02-04 01:00:25.195772 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener 
container] ****** 2026-02-04 01:00:25.195779 | orchestrator | Wednesday 04 February 2026 01:00:05 +0000 (0:00:11.636) 0:01:45.085 **** 2026-02-04 01:00:25.195785 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:00:25.195791 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:00:25.195798 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:00:25.195804 | orchestrator | 2026-02-04 01:00:25.195810 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-02-04 01:00:25.195817 | orchestrator | Wednesday 04 February 2026 01:00:16 +0000 (0:00:10.362) 0:01:55.448 **** 2026-02-04 01:00:25.195823 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:00:25.195834 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:00:25.195840 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:00:25.195846 | orchestrator | 2026-02-04 01:00:25.195853 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:00:25.195859 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 01:00:25.195866 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 01:00:25.195873 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 01:00:25.195880 | orchestrator | 2026-02-04 01:00:25.195888 | orchestrator | 2026-02-04 01:00:25.195895 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:00:25.195901 | orchestrator | Wednesday 04 February 2026 01:00:24 +0000 (0:00:08.085) 0:02:03.533 **** 2026-02-04 01:00:25.195909 | orchestrator | =============================================================================== 2026-02-04 01:00:25.195915 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 18.04s 
2026-02-04 01:00:25.195921 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.27s 2026-02-04 01:00:25.195928 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.64s 2026-02-04 01:00:25.195935 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.36s 2026-02-04 01:00:25.195942 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.04s 2026-02-04 01:00:25.195949 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 8.09s 2026-02-04 01:00:25.195955 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.29s 2026-02-04 01:00:25.195963 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.27s 2026-02-04 01:00:25.195969 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.21s 2026-02-04 01:00:25.195976 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.18s 2026-02-04 01:00:25.195982 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.01s 2026-02-04 01:00:25.195988 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.75s 2026-02-04 01:00:25.195995 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.63s 2026-02-04 01:00:25.196001 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.62s 2026-02-04 01:00:25.196007 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.63s 2026-02-04 01:00:25.196013 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.37s 2026-02-04 01:00:25.196026 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.36s 2026-02-04 
01:00:25.196033 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.27s 2026-02-04 01:00:25.196039 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.59s 2026-02-04 01:00:25.196045 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.33s 2026-02-04 01:00:25.196057 | orchestrator | 2026-02-04 01:00:25 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED 2026-02-04 01:00:25.196194 | orchestrator | 2026-02-04 01:00:25 | INFO  | Task 6a09cec0-01ce-4961-8fe6-23a946af439a is in state STARTED 2026-02-04 01:00:25.196207 | orchestrator | 2026-02-04 01:00:25 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED 2026-02-04 01:00:25.196214 | orchestrator | 2026-02-04 01:00:25 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED 2026-02-04 01:00:25.197181 | orchestrator | 2026-02-04 01:00:25 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:00:28.241297 | orchestrator | 2026-02-04 01:00:28 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED 2026-02-04 01:00:28.242562 | orchestrator | 2026-02-04 01:00:28 | INFO  | Task 7963a861-cfde-470a-ac6a-306bf69cf493 is in state STARTED 2026-02-04 01:00:28.244090 | orchestrator | 2026-02-04 01:00:28 | INFO  | Task 6a09cec0-01ce-4961-8fe6-23a946af439a is in state STARTED 2026-02-04 01:00:28.245392 | orchestrator | 2026-02-04 01:00:28 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED 2026-02-04 01:00:28.247101 | orchestrator | 2026-02-04 01:00:28 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED 2026-02-04 01:00:28.247183 | orchestrator | 2026-02-04 01:00:28 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:00:31.278189 | orchestrator | 2026-02-04 01:00:31 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED 2026-02-04 01:00:31.278801 | orchestrator | 2026-02-04 
01:00:31 | INFO  | Task 7963a861-cfde-470a-ac6a-306bf69cf493 is in state STARTED 2026-02-04 01:00:31.279471 | orchestrator | 2026-02-04 01:00:31 | INFO  | Task 6a09cec0-01ce-4961-8fe6-23a946af439a is in state SUCCESS 2026-02-04 01:00:31.280735 | orchestrator | 2026-02-04 01:00:31 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED 2026-02-04 01:00:31.281364 | orchestrator | 2026-02-04 01:00:31 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED 2026-02-04 01:00:31.281657 | orchestrator | 2026-02-04 01:00:31 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:00:34.319506 | orchestrator | 2026-02-04 01:00:34 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED 2026-02-04 01:00:34.321244 | orchestrator | 2026-02-04 01:00:34 | INFO  | Task 7963a861-cfde-470a-ac6a-306bf69cf493 is in state STARTED 2026-02-04 01:00:34.323081 | orchestrator | 2026-02-04 01:00:34 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED 2026-02-04 01:00:34.324874 | orchestrator | 2026-02-04 01:00:34 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED 2026-02-04 01:00:34.324910 | orchestrator | 2026-02-04 01:00:34 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:00:37.371665 | orchestrator | 2026-02-04 01:00:37 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED 2026-02-04 01:00:37.374959 | orchestrator | 2026-02-04 01:00:37 | INFO  | Task 7963a861-cfde-470a-ac6a-306bf69cf493 is in state STARTED 2026-02-04 01:00:37.376995 | orchestrator | 2026-02-04 01:00:37 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED 2026-02-04 01:00:37.378884 | orchestrator | 2026-02-04 01:00:37 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED 2026-02-04 01:00:37.378911 | orchestrator | 2026-02-04 01:00:37 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:00:40.418478 | orchestrator | 2026-02-04 01:00:40 | INFO  | Task 
9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED
2026-02-04 01:00:40.419985 | orchestrator | 2026-02-04 01:00:40 | INFO  | Task 7963a861-cfde-470a-ac6a-306bf69cf493 is in state STARTED
2026-02-04 01:00:40.421333 | orchestrator | 2026-02-04 01:00:40 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:00:40.422465 | orchestrator | 2026-02-04 01:00:40 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:00:40.422770 | orchestrator | 2026-02-04 01:00:40 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:00:43.478217 | orchestrator | 2026-02-04 01:00:43 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED
2026-02-04 01:00:43.478296 | orchestrator | 2026-02-04 01:00:43 | INFO  | Task 7963a861-cfde-470a-ac6a-306bf69cf493 is in state STARTED
2026-02-04 01:00:43.479689 | orchestrator | 2026-02-04 01:00:43 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:00:43.481539 | orchestrator | 2026-02-04 01:00:43 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:00:43.481590 | orchestrator | 2026-02-04 01:00:43 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:00:46.522082 | orchestrator | 2026-02-04 01:00:46 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED
2026-02-04 01:00:46.523041 | orchestrator | 2026-02-04 01:00:46 | INFO  | Task 7963a861-cfde-470a-ac6a-306bf69cf493 is in state STARTED
2026-02-04 01:00:46.524901 | orchestrator | 2026-02-04 01:00:46 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:00:46.526482 | orchestrator | 2026-02-04 01:00:46 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:00:46.526544 | orchestrator | 2026-02-04 01:00:46 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:00:49.562144 | orchestrator | 2026-02-04 01:00:49 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED
2026-02-04 01:00:49.562739 | orchestrator | 2026-02-04 01:00:49 | INFO  | Task 7963a861-cfde-470a-ac6a-306bf69cf493 is in state STARTED
2026-02-04 01:00:49.563715 | orchestrator | 2026-02-04 01:00:49 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:00:49.564550 | orchestrator | 2026-02-04 01:00:49 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:00:49.564576 | orchestrator | 2026-02-04 01:00:49 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:00:52.608120 | orchestrator | 2026-02-04 01:00:52 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED
2026-02-04 01:00:52.609276 | orchestrator | 2026-02-04 01:00:52 | INFO  | Task 7963a861-cfde-470a-ac6a-306bf69cf493 is in state STARTED
2026-02-04 01:00:52.610946 | orchestrator | 2026-02-04 01:00:52 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:00:52.612451 | orchestrator | 2026-02-04 01:00:52 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:00:52.612483 | orchestrator | 2026-02-04 01:00:52 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:00:55.647940 | orchestrator | 2026-02-04 01:00:55 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED
2026-02-04 01:00:55.649423 | orchestrator | 2026-02-04 01:00:55 | INFO  | Task 7963a861-cfde-470a-ac6a-306bf69cf493 is in state STARTED
2026-02-04 01:00:55.650987 | orchestrator | 2026-02-04 01:00:55 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:00:55.652627 | orchestrator | 2026-02-04 01:00:55 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:00:55.652669 | orchestrator | 2026-02-04 01:00:55 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:00:58.684373 | orchestrator | 2026-02-04 01:00:58 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED
2026-02-04 01:00:58.685399 | orchestrator | 2026-02-04 01:00:58 | INFO  | Task 7963a861-cfde-470a-ac6a-306bf69cf493 is in state STARTED
2026-02-04 01:00:58.686177 | orchestrator | 2026-02-04 01:00:58 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:00:58.688681 | orchestrator | 2026-02-04 01:00:58 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:00:58.688722 | orchestrator | 2026-02-04 01:00:58 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:01:01.722121 | orchestrator | 2026-02-04 01:01:01 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED
2026-02-04 01:01:01.723874 | orchestrator | 2026-02-04 01:01:01 | INFO  | Task 7963a861-cfde-470a-ac6a-306bf69cf493 is in state STARTED
2026-02-04 01:01:01.726219 | orchestrator | 2026-02-04 01:01:01 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:01:01.728271 | orchestrator | 2026-02-04 01:01:01 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:01:01.728480 | orchestrator | 2026-02-04 01:01:01 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:01:04.777373 | orchestrator | 2026-02-04 01:01:04 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED
2026-02-04 01:01:04.778859 | orchestrator | 2026-02-04 01:01:04 | INFO  | Task 7963a861-cfde-470a-ac6a-306bf69cf493 is in state STARTED
2026-02-04 01:01:04.781555 | orchestrator | 2026-02-04 01:01:04 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:01:04.783727 | orchestrator | 2026-02-04 01:01:04 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:01:04.785021 | orchestrator | 2026-02-04 01:01:04 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:01:07.826910 | orchestrator | 2026-02-04 01:01:07 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED
2026-02-04 01:01:07.830040 | orchestrator | 2026-02-04 01:01:07 | INFO  | Task 7963a861-cfde-470a-ac6a-306bf69cf493 is in state STARTED
2026-02-04 01:01:07.832697 | orchestrator | 2026-02-04 01:01:07 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:01:07.835241 | orchestrator | 2026-02-04 01:01:07 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:01:07.835516 | orchestrator | 2026-02-04 01:01:07 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:01:10.878412 | orchestrator | 2026-02-04 01:01:10 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED
2026-02-04 01:01:10.879540 | orchestrator | 2026-02-04 01:01:10 | INFO  | Task 7963a861-cfde-470a-ac6a-306bf69cf493 is in state STARTED
2026-02-04 01:01:10.881302 | orchestrator | 2026-02-04 01:01:10 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:01:10.882815 | orchestrator | 2026-02-04 01:01:10 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:01:10.882863 | orchestrator | 2026-02-04 01:01:10 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:01:13.923272 | orchestrator | 2026-02-04 01:01:13 | INFO  | Task 9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state STARTED
2026-02-04 01:01:13.924815 | orchestrator | 2026-02-04 01:01:13 | INFO  | Task 7963a861-cfde-470a-ac6a-306bf69cf493 is in state STARTED
2026-02-04 01:01:13.926425 | orchestrator | 2026-02-04 01:01:13 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:01:13.928254 | orchestrator | 2026-02-04 01:01:13 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:01:13.928303 | orchestrator | 2026-02-04 01:01:13 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:01:16.974242 | orchestrator | 2026-02-04 01:01:16 | INFO  | Task
9e1f886c-6de0-453d-a6d5-e0138ec6e095 is in state SUCCESS
2026-02-04 01:01:16.975098 | orchestrator |
2026-02-04 01:01:16.975149 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-04 01:01:16.975156 | orchestrator | 2.16.14
2026-02-04 01:01:16.975163 | orchestrator |
2026-02-04 01:01:16.975169 | orchestrator | PLAY [Bootstrap ceph dashboard] ***********************************************
2026-02-04 01:01:16.975175 | orchestrator |
2026-02-04 01:01:16.975181 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-02-04 01:01:16.975187 | orchestrator | Wednesday 04 February 2026 00:58:58 +0000 (0:00:00.257) 0:00:00.257 ****
2026-02-04 01:01:16.975193 | orchestrator | changed: [testbed-manager]
2026-02-04 01:01:16.975199 | orchestrator |
2026-02-04 01:01:16.975205 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-02-04 01:01:16.975211 | orchestrator | Wednesday 04 February 2026 00:59:00 +0000 (0:00:01.753) 0:00:02.011 ****
2026-02-04 01:01:16.975216 | orchestrator | changed: [testbed-manager]
2026-02-04 01:01:16.975222 | orchestrator |
2026-02-04 01:01:16.975227 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-02-04 01:01:16.975233 | orchestrator | Wednesday 04 February 2026 00:59:01 +0000 (0:00:01.202) 0:00:03.214 ****
2026-02-04 01:01:16.975239 | orchestrator | changed: [testbed-manager]
2026-02-04 01:01:16.975245 | orchestrator |
2026-02-04 01:01:16.975250 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-02-04 01:01:16.975256 | orchestrator | Wednesday 04 February 2026 00:59:02 +0000 (0:00:01.420) 0:00:04.635 ****
2026-02-04 01:01:16.975261 | orchestrator | changed: [testbed-manager]
2026-02-04 01:01:16.975267 | orchestrator |
2026-02-04 01:01:16.975272 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-02-04 01:01:16.975278 | orchestrator | Wednesday 04 February 2026 00:59:04 +0000 (0:00:01.581) 0:00:06.217 ****
2026-02-04 01:01:16.975283 | orchestrator | changed: [testbed-manager]
2026-02-04 01:01:16.975289 | orchestrator |
2026-02-04 01:01:16.975294 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-02-04 01:01:16.975300 | orchestrator | Wednesday 04 February 2026 00:59:05 +0000 (0:00:00.908) 0:00:07.126 ****
2026-02-04 01:01:16.975305 | orchestrator | changed: [testbed-manager]
2026-02-04 01:01:16.975311 | orchestrator |
2026-02-04 01:01:16.975317 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-02-04 01:01:16.975322 | orchestrator | Wednesday 04 February 2026 00:59:06 +0000 (0:00:01.450) 0:00:08.576 ****
2026-02-04 01:01:16.975328 | orchestrator | changed: [testbed-manager]
2026-02-04 01:01:16.975333 | orchestrator |
2026-02-04 01:01:16.975339 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-02-04 01:01:16.975344 | orchestrator | Wednesday 04 February 2026 00:59:07 +0000 (0:00:00.995) 0:00:09.664 ****
2026-02-04 01:01:16.975350 | orchestrator | changed: [testbed-manager]
2026-02-04 01:01:16.975355 | orchestrator |
2026-02-04 01:01:16.975361 | orchestrator | TASK [Create admin user] *******************************************************
2026-02-04 01:01:16.975366 | orchestrator | Wednesday 04 February 2026 00:59:08 +0000 (0:00:00.995) 0:00:10.659 ****
2026-02-04 01:01:16.975644 | orchestrator | changed: [testbed-manager]
2026-02-04 01:01:16.975669 | orchestrator |
2026-02-04 01:01:16.975675 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-02-04 01:01:16.975681 | orchestrator | Wednesday 04 February 2026 01:00:04 +0000 (0:00:56.039) 0:01:06.698 ****
2026-02-04 01:01:16.975689 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:01:16.975696 | orchestrator |
2026-02-04 01:01:16.975703 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-04 01:01:16.975709 | orchestrator |
2026-02-04 01:01:16.975715 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-04 01:01:16.975722 | orchestrator | Wednesday 04 February 2026 01:00:04 +0000 (0:00:00.145) 0:01:06.844 ****
2026-02-04 01:01:16.975729 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:01:16.975735 | orchestrator |
2026-02-04 01:01:16.975742 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-04 01:01:16.975748 | orchestrator |
2026-02-04 01:01:16.975755 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-04 01:01:16.975762 | orchestrator | Wednesday 04 February 2026 01:00:06 +0000 (0:00:01.571) 0:01:08.415 ****
2026-02-04 01:01:16.975768 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:01:16.975774 | orchestrator |
2026-02-04 01:01:16.975781 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-04 01:01:16.975787 | orchestrator |
2026-02-04 01:01:16.975794 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-04 01:01:16.975801 | orchestrator | Wednesday 04 February 2026 01:00:17 +0000 (0:00:11.251) 0:01:19.667 ****
2026-02-04 01:01:16.975810 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:01:16.975819 | orchestrator |
2026-02-04 01:01:16.975831 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 01:01:16.975871 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 01:01:16.975881 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:01:16.975891 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:01:16.975901 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:01:16.975910 | orchestrator |
2026-02-04 01:01:16.975919 | orchestrator |
2026-02-04 01:01:16.975929 | orchestrator |
2026-02-04 01:01:16.975949 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 01:01:16.975958 | orchestrator | Wednesday 04 February 2026 01:00:29 +0000 (0:00:11.559) 0:01:31.226 ****
2026-02-04 01:01:16.975968 | orchestrator | ===============================================================================
2026-02-04 01:01:16.975978 | orchestrator | Create admin user ------------------------------------------------------ 56.04s
2026-02-04 01:01:16.975997 | orchestrator | Restart ceph manager service ------------------------------------------- 24.38s
2026-02-04 01:01:16.976004 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.75s
2026-02-04 01:01:16.976010 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.58s
2026-02-04 01:01:16.976017 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.45s
2026-02-04 01:01:16.976043 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.42s
2026-02-04 01:01:16.976050 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.20s
2026-02-04 01:01:16.976057 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.09s
2026-02-04 01:01:16.976063 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.00s
2026-02-04 01:01:16.976070 | orchestrator | Set mgr/dashboard/standby_behaviour to error
---------------------------- 0.91s
2026-02-04 01:01:16.976084 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.15s
2026-02-04 01:01:16.976091 | orchestrator |
2026-02-04 01:01:16.976097 | orchestrator |
2026-02-04 01:01:16.976103 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 01:01:16.976109 | orchestrator |
2026-02-04 01:01:16.976114 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 01:01:16.976120 | orchestrator | Wednesday 04 February 2026 00:58:19 +0000 (0:00:00.218) 0:00:00.218 ****
2026-02-04 01:01:16.976125 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:01:16.976131 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:01:16.976136 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:01:16.976164 | orchestrator |
2026-02-04 01:01:16.976197 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 01:01:16.976204 | orchestrator | Wednesday 04 February 2026 00:58:19 +0000 (0:00:00.246) 0:00:00.464 ****
2026-02-04 01:01:16.976210 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-02-04 01:01:16.976216 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-02-04 01:01:16.976221 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-02-04 01:01:16.976227 | orchestrator |
2026-02-04 01:01:16.976232 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-02-04 01:01:16.976238 | orchestrator |
2026-02-04 01:01:16.976244 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-04 01:01:16.976249 | orchestrator | Wednesday 04 February 2026 00:58:19 +0000 (0:00:00.397) 0:00:00.862 ****
2026-02-04 01:01:16.976273 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for
testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:01:16.976280 | orchestrator |
2026-02-04 01:01:16.976285 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-02-04 01:01:16.976291 | orchestrator | Wednesday 04 February 2026 00:58:20 +0000 (0:00:00.597) 0:00:01.460 ****
2026-02-04 01:01:16.976310 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-02-04 01:01:16.976315 | orchestrator |
2026-02-04 01:01:16.976321 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-02-04 01:01:16.976326 | orchestrator | Wednesday 04 February 2026 00:58:25 +0000 (0:00:04.523) 0:00:05.983 ****
2026-02-04 01:01:16.976393 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-02-04 01:01:16.976399 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-02-04 01:01:16.976404 | orchestrator |
2026-02-04 01:01:16.976410 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-02-04 01:01:16.976416 | orchestrator | Wednesday 04 February 2026 00:58:31 +0000 (0:00:06.895) 0:00:12.879 ****
2026-02-04 01:01:16.976421 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-02-04 01:01:16.976427 | orchestrator |
2026-02-04 01:01:16.976432 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-02-04 01:01:16.976438 | orchestrator | Wednesday 04 February 2026 00:58:36 +0000 (0:00:04.102) 0:00:16.982 ****
2026-02-04 01:01:16.976443 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-04 01:01:16.976449 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-02-04 01:01:16.976454 | orchestrator |
2026-02-04 01:01:16.976460 | orchestrator | TASK [service-ks-register : designate | Creating
roles] ************************ 2026-02-04 01:01:16.976466 | orchestrator | Wednesday 04 February 2026 00:58:40 +0000 (0:00:03.994) 0:00:20.976 **** 2026-02-04 01:01:16.976471 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-04 01:01:16.976477 | orchestrator | 2026-02-04 01:01:16.976482 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-02-04 01:01:16.976488 | orchestrator | Wednesday 04 February 2026 00:58:43 +0000 (0:00:03.643) 0:00:24.620 **** 2026-02-04 01:01:16.976494 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-02-04 01:01:16.976519 | orchestrator | 2026-02-04 01:01:16.976525 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-02-04 01:01:16.976530 | orchestrator | Wednesday 04 February 2026 00:58:48 +0000 (0:00:04.304) 0:00:28.924 **** 2026-02-04 01:01:16.976577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 01:01:16.976592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 01:01:16.976602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 01:01:16.976613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976743 | orchestrator | 2026-02-04 01:01:16.976749 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-02-04 01:01:16.976755 | orchestrator | Wednesday 04 February 2026 00:58:50 +0000 (0:00:02.961) 0:00:31.885 **** 2026-02-04 01:01:16.976760 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:16.976766 | orchestrator | 2026-02-04 01:01:16.976771 | orchestrator | TASK [designate : Set 
designate policy file] *********************************** 2026-02-04 01:01:16.976777 | orchestrator | Wednesday 04 February 2026 00:58:51 +0000 (0:00:00.139) 0:00:32.025 **** 2026-02-04 01:01:16.976782 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:16.976788 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:16.976793 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:16.976799 | orchestrator | 2026-02-04 01:01:16.976804 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-04 01:01:16.976812 | orchestrator | Wednesday 04 February 2026 00:58:51 +0000 (0:00:00.315) 0:00:32.341 **** 2026-02-04 01:01:16.976818 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:01:16.976824 | orchestrator | 2026-02-04 01:01:16.976829 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-02-04 01:01:16.976838 | orchestrator | Wednesday 04 February 2026 00:58:52 +0000 (0:00:00.691) 0:00:33.033 **** 2026-02-04 01:01:16.976844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}}) 2026-02-04 01:01:16.976850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 01:01:16.976856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 01:01:16.976865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976931 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976970 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.976995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.977002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.977008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.977014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.977023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.977029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.977038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.977044 | orchestrator | 2026-02-04 01:01:16.977050 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-02-04 01:01:16.977055 | orchestrator | Wednesday 04 February 2026 00:58:58 +0000 (0:00:06.335) 0:00:39.369 
**** 2026-02-04 01:01:16.977596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:01:16.977627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 01:01:16.977639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.977663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.977673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.977690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.977699 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:16.977716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:01:16.977722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 01:01:16.977728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.977738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.977744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.977753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.977759 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:16.977768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:01:16.977774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 01:01:16.977783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.977789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.977795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.977801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.977807 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:16.977813 | orchestrator | 2026-02-04 01:01:16.977821 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-02-04 01:01:16.977827 | orchestrator | Wednesday 04 February 2026 00:59:00 +0000 (0:00:01.848) 0:00:41.217 **** 2026-02-04 01:01:16.977837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:01:16.977843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 01:01:16.977852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.977858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.977864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.977873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.977879 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:16.977888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:01:16.977895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:01:16.977907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 01:01:16.977918 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-04 01:01:16.977933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.977946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.977961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.977977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.977987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978006 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:01:16.978051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978069 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:01:16.978077 | orchestrator |
2026-02-04 01:01:16.978091 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-02-04 01:01:16.978102 | orchestrator | Wednesday 04 February 2026 00:59:01 +0000 (0:00:01.493) 0:00:42.710 ****
2026-02-04 01:01:16.978118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-04 01:01:16.978138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-04 01:01:16.978147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-04 01:01:16.978156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-04 01:01:16.978199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-04 01:01:16.978243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-04 01:01:16.978323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978476 | orchestrator |
2026-02-04 01:01:16.978482 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-02-04 01:01:16.978488 | orchestrator | Wednesday 04 February 2026 00:59:09 +0000 (0:00:07.719) 0:00:50.430 ****
2026-02-04 01:01:16.978502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-04 01:01:16.978512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-04 01:01:16.978519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-04 01:01:16.978524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-04 01:01:16.978530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-04 01:01:16.978643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-04 01:01:16.978658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978745 | orchestrator |
2026-02-04 01:01:16.978750 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-02-04 01:01:16.978756 | orchestrator | Wednesday 04 February 2026 00:59:26 +0000 (0:00:16.944) 0:01:07.375 ****
2026-02-04 01:01:16.978762 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-02-04 01:01:16.978767 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-02-04 01:01:16.978775 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-02-04 01:01:16.978781 | orchestrator |
2026-02-04 01:01:16.978787 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-02-04 01:01:16.978792 | orchestrator | Wednesday 04 February 2026 00:59:31 +0000 (0:00:03.839) 0:01:12.145 ****
2026-02-04 01:01:16.978800 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-02-04 01:01:16.978814 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-02-04 01:01:16.978823 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-02-04 01:01:16.978832 | orchestrator |
2026-02-04 01:01:16.978841 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-02-04 01:01:16.978851 | orchestrator | Wednesday 04 February 2026 00:59:35 +0000 (0:00:03.839) 0:01:15.985 ****
2026-02-04 01:01:16.978861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-04 01:01:16.978871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-04 01:01:16.978879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-04 01:01:16.978894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-04 01:01:16.978916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-04 01:01:16.978947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-04 01:01:16.978957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.978971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:01:16.979047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.979091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.979110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2026-02-04 01:01:16.979125 | orchestrator | 2026-02-04 01:01:16.979160 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-02-04 01:01:16.979176 | orchestrator | Wednesday 04 February 2026 00:59:37 +0000 (0:00:02.914) 0:01:18.899 **** 2026-02-04 01:01:16.979190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:01:16.979204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:01:16.979217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:01:16.979238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:01:16.979258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:01:16.979318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:01:16.979392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.979450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.979466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.979476 | orchestrator | 2026-02-04 01:01:16.979492 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-04 01:01:16.979507 | orchestrator | Wednesday 04 February 2026 00:59:41 +0000 (0:00:03.218) 0:01:22.117 **** 2026-02-04 01:01:16.979529 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:16.979543 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:16.979582 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:16.979597 | orchestrator | 2026-02-04 01:01:16.979620 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-02-04 01:01:16.979634 | orchestrator | Wednesday 04 February 2026 00:59:41 +0000 (0:00:00.762) 0:01:22.879 **** 2026-02-04 01:01:16.979649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:01:16.979665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 01:01:16.979688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:01:16.979759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 01:01:16.979785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979798 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:16.979808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 01:01:16.979823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 01:01:16.979854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979889 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979913 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:16.979927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979953 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:01:16.979964 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:16.979976 | orchestrator | 2026-02-04 01:01:16.979988 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-02-04 01:01:16.980001 | orchestrator | Wednesday 04 February 2026 00:59:42 +0000 (0:00:00.678) 0:01:23.558 **** 2026-02-04 01:01:16.980015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 01:01:16.980036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 01:01:16.980049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 01:01:16.980061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:01:16.980087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:01:16.980100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 01:01:16.980121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.980133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.980146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.980158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.980171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.980188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.980197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.980211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.980219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.980228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.980237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.980251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:01:16.980260 | orchestrator | 2026-02-04 01:01:16.980268 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-04 01:01:16.980335 | orchestrator | Wednesday 04 February 2026 00:59:47 +0000 (0:00:04.583) 0:01:28.141 **** 2026-02-04 01:01:16.980347 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:16.980356 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:16.980376 | orchestrator | 
skipping: [testbed-node-2] 2026-02-04 01:01:16.980384 | orchestrator | 2026-02-04 01:01:16.980393 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-02-04 01:01:16.980402 | orchestrator | Wednesday 04 February 2026 00:59:47 +0000 (0:00:00.259) 0:01:28.401 **** 2026-02-04 01:01:16.980411 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-02-04 01:01:16.980420 | orchestrator | 2026-02-04 01:01:16.980430 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-02-04 01:01:16.980438 | orchestrator | Wednesday 04 February 2026 00:59:49 +0000 (0:00:02.262) 0:01:30.663 **** 2026-02-04 01:01:16.980447 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-04 01:01:16.980456 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-02-04 01:01:16.980464 | orchestrator | 2026-02-04 01:01:16.980473 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-02-04 01:01:16.980482 | orchestrator | Wednesday 04 February 2026 00:59:52 +0000 (0:00:02.462) 0:01:33.126 **** 2026-02-04 01:01:16.980490 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:16.980498 | orchestrator | 2026-02-04 01:01:16.980507 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-04 01:01:16.980516 | orchestrator | Wednesday 04 February 2026 01:00:08 +0000 (0:00:16.628) 0:01:49.754 **** 2026-02-04 01:01:16.980524 | orchestrator | 2026-02-04 01:01:16.980533 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-04 01:01:16.980541 | orchestrator | Wednesday 04 February 2026 01:00:08 +0000 (0:00:00.104) 0:01:49.858 **** 2026-02-04 01:01:16.980600 | orchestrator | 2026-02-04 01:01:16.980612 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-04 
01:01:16.980622 | orchestrator | Wednesday 04 February 2026 01:00:09 +0000 (0:00:00.132) 0:01:49.991 **** 2026-02-04 01:01:16.980631 | orchestrator | 2026-02-04 01:01:16.980639 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-02-04 01:01:16.980647 | orchestrator | Wednesday 04 February 2026 01:00:09 +0000 (0:00:00.070) 0:01:50.061 **** 2026-02-04 01:01:16.980656 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:16.980664 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:16.980673 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:16.980682 | orchestrator | 2026-02-04 01:01:16.980690 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-02-04 01:01:16.980698 | orchestrator | Wednesday 04 February 2026 01:00:21 +0000 (0:00:12.259) 0:02:02.321 **** 2026-02-04 01:01:16.980706 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:16.980715 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:16.980723 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:16.980732 | orchestrator | 2026-02-04 01:01:16.980741 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-02-04 01:01:16.980749 | orchestrator | Wednesday 04 February 2026 01:00:33 +0000 (0:00:11.723) 0:02:14.045 **** 2026-02-04 01:01:16.980758 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:16.980765 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:16.980774 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:16.980782 | orchestrator | 2026-02-04 01:01:16.980790 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-02-04 01:01:16.980798 | orchestrator | Wednesday 04 February 2026 01:00:42 +0000 (0:00:09.413) 0:02:23.459 **** 2026-02-04 01:01:16.980807 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:16.980816 | orchestrator | 
changed: [testbed-node-1] 2026-02-04 01:01:16.980824 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:16.980831 | orchestrator | 2026-02-04 01:01:16.980840 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-02-04 01:01:16.980849 | orchestrator | Wednesday 04 February 2026 01:00:48 +0000 (0:00:05.677) 0:02:29.137 **** 2026-02-04 01:01:16.980857 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:16.980866 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:16.980881 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:16.980892 | orchestrator | 2026-02-04 01:01:16.980901 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-02-04 01:01:16.980911 | orchestrator | Wednesday 04 February 2026 01:00:56 +0000 (0:00:08.614) 0:02:37.752 **** 2026-02-04 01:01:16.980920 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:16.980929 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:16.980937 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:16.980946 | orchestrator | 2026-02-04 01:01:16.980954 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-02-04 01:01:16.980963 | orchestrator | Wednesday 04 February 2026 01:01:08 +0000 (0:00:11.935) 0:02:49.688 **** 2026-02-04 01:01:16.980971 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:16.980981 | orchestrator | 2026-02-04 01:01:16.980991 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:01:16.981001 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 01:01:16.981010 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 01:01:16.981020 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  
rescued=0 ignored=0 2026-02-04 01:01:16.981028 | orchestrator | 2026-02-04 01:01:16.981037 | orchestrator | 2026-02-04 01:01:16.981051 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:01:16.981060 | orchestrator | Wednesday 04 February 2026 01:01:16 +0000 (0:00:07.322) 0:02:57.010 **** 2026-02-04 01:01:16.981069 | orchestrator | =============================================================================== 2026-02-04 01:01:16.981077 | orchestrator | designate : Copying over designate.conf -------------------------------- 16.94s 2026-02-04 01:01:16.981095 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.63s 2026-02-04 01:01:16.981105 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 12.26s 2026-02-04 01:01:16.981114 | orchestrator | designate : Restart designate-worker container ------------------------- 11.94s 2026-02-04 01:01:16.981123 | orchestrator | designate : Restart designate-api container ---------------------------- 11.72s 2026-02-04 01:01:16.981132 | orchestrator | designate : Restart designate-central container ------------------------- 9.41s 2026-02-04 01:01:16.981142 | orchestrator | designate : Restart designate-mdns container ---------------------------- 8.62s 2026-02-04 01:01:16.981151 | orchestrator | designate : Copying over config.json files for services ----------------- 7.72s 2026-02-04 01:01:16.981159 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.32s 2026-02-04 01:01:16.981165 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.90s 2026-02-04 01:01:16.981170 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.34s 2026-02-04 01:01:16.981176 | orchestrator | designate : Restart designate-producer container ------------------------ 5.68s 2026-02-04 01:01:16.981182 | 
orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.77s 2026-02-04 01:01:16.981187 | orchestrator | designate : Check designate containers ---------------------------------- 4.58s 2026-02-04 01:01:16.981193 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.52s 2026-02-04 01:01:16.981198 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.30s 2026-02-04 01:01:16.981204 | orchestrator | service-ks-register : designate | Creating projects --------------------- 4.10s 2026-02-04 01:01:16.981210 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.99s 2026-02-04 01:01:16.981220 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.84s 2026-02-04 01:01:16.981229 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.64s 2026-02-04 01:01:16.981244 | orchestrator | 2026-02-04 01:01:16 | INFO  | Task 7963a861-cfde-470a-ac6a-306bf69cf493 is in state STARTED 2026-02-04 01:01:16.981253 | orchestrator | 2026-02-04 01:01:16 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED 2026-02-04 01:01:16.981262 | orchestrator | 2026-02-04 01:01:16 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED 2026-02-04 01:01:16.981271 | orchestrator | 2026-02-04 01:01:16 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:01:20.015917 | orchestrator | 2026-02-04 01:01:20 | INFO  | Task 7963a861-cfde-470a-ac6a-306bf69cf493 is in state STARTED 2026-02-04 01:01:20.016320 | orchestrator | 2026-02-04 01:01:20 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED 2026-02-04 01:01:20.018115 | orchestrator | 2026-02-04 01:01:20 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED 2026-02-04 01:01:20.018971 | orchestrator | 2026-02-04 01:01:20 | INFO  | Task 
1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED 2026-02-04 01:01:20.019669 | orchestrator | 2026-02-04 01:01:20 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:01:23.056780 | orchestrator | 2026-02-04 01:01:23 | INFO  | Task 7963a861-cfde-470a-ac6a-306bf69cf493 is in state STARTED 2026-02-04 01:01:23.058366 | orchestrator | 2026-02-04 01:01:23 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED 2026-02-04 01:01:23.059844 | orchestrator | 2026-02-04 01:01:23 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED 2026-02-04 01:01:23.061134 | orchestrator | 2026-02-04 01:01:23 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED 2026-02-04 01:01:23.061187 | orchestrator | 2026-02-04 01:01:23 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:01:26.100060 | orchestrator | 2026-02-04 01:01:26 | INFO  | Task 7963a861-cfde-470a-ac6a-306bf69cf493 is in state STARTED 2026-02-04 01:01:26.101516 | orchestrator | 2026-02-04 01:01:26 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED 2026-02-04 01:01:26.103525 | orchestrator | 2026-02-04 01:01:26 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED 2026-02-04 01:01:26.105039 | orchestrator | 2026-02-04 01:01:26 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED 2026-02-04 01:01:26.106781 | orchestrator | 2026-02-04 01:01:26 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:01:29.153000 | orchestrator | 2026-02-04 01:01:29 | INFO  | Task 7963a861-cfde-470a-ac6a-306bf69cf493 is in state STARTED 2026-02-04 01:01:29.154161 | orchestrator | 2026-02-04 01:01:29 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED 2026-02-04 01:01:29.156197 | orchestrator | 2026-02-04 01:01:29 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED 2026-02-04 01:01:29.157702 | orchestrator | 2026-02-04 01:01:29 | INFO  | Task 
1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED 2026-02-04 01:01:29.157737 | orchestrator | 2026-02-04 01:01:29 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:01:32.199275 | orchestrator | 2026-02-04 01:01:32 | INFO  | Task e0579c5e-83da-443f-86d3-6371a081f619 is in state STARTED 2026-02-04 01:01:32.200889 | orchestrator | 2026-02-04 01:01:32 | INFO  | Task 7963a861-cfde-470a-ac6a-306bf69cf493 is in state SUCCESS 2026-02-04 01:01:32.202152 | orchestrator | 2026-02-04 01:01:32.202190 | orchestrator | 2026-02-04 01:01:32.202199 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 01:01:32.202207 | orchestrator | 2026-02-04 01:01:32.202214 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 01:01:32.202236 | orchestrator | Wednesday 04 February 2026 01:00:28 +0000 (0:00:00.186) 0:00:00.186 **** 2026-02-04 01:01:32.202243 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:01:32.202250 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:01:32.202257 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:01:32.202263 | orchestrator | 2026-02-04 01:01:32.202269 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 01:01:32.202276 | orchestrator | Wednesday 04 February 2026 01:00:28 +0000 (0:00:00.219) 0:00:00.406 **** 2026-02-04 01:01:32.202282 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-02-04 01:01:32.202289 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-02-04 01:01:32.202296 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-02-04 01:01:32.202303 | orchestrator | 2026-02-04 01:01:32.202309 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-02-04 01:01:32.202316 | orchestrator | 2026-02-04 01:01:32.202322 | orchestrator | TASK [placement : 
include_tasks] *********************************************** 2026-02-04 01:01:32.202328 | orchestrator | Wednesday 04 February 2026 01:00:28 +0000 (0:00:00.298) 0:00:00.704 **** 2026-02-04 01:01:32.202335 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:01:32.202341 | orchestrator | 2026-02-04 01:01:32.202348 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-02-04 01:01:32.202354 | orchestrator | Wednesday 04 February 2026 01:00:29 +0000 (0:00:00.394) 0:00:01.099 **** 2026-02-04 01:01:32.202360 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-02-04 01:01:32.202366 | orchestrator | 2026-02-04 01:01:32.202373 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-02-04 01:01:32.202379 | orchestrator | Wednesday 04 February 2026 01:00:31 +0000 (0:00:02.893) 0:00:03.992 **** 2026-02-04 01:01:32.202386 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-02-04 01:01:32.202392 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-02-04 01:01:32.202399 | orchestrator | 2026-02-04 01:01:32.202405 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-02-04 01:01:32.202411 | orchestrator | Wednesday 04 February 2026 01:00:37 +0000 (0:00:05.970) 0:00:09.963 **** 2026-02-04 01:01:32.202417 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-04 01:01:32.202424 | orchestrator | 2026-02-04 01:01:32.202430 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-02-04 01:01:32.202437 | orchestrator | Wednesday 04 February 2026 01:00:41 +0000 (0:00:03.290) 0:00:13.253 **** 2026-02-04 01:01:32.202443 | orchestrator | [WARNING]: Module 
did not set no_log for update_password 2026-02-04 01:01:32.202449 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-02-04 01:01:32.202455 | orchestrator | 2026-02-04 01:01:32.202462 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-02-04 01:01:32.202468 | orchestrator | Wednesday 04 February 2026 01:00:45 +0000 (0:00:04.137) 0:00:17.391 **** 2026-02-04 01:01:32.202475 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-04 01:01:32.202481 | orchestrator | 2026-02-04 01:01:32.202580 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-02-04 01:01:32.202588 | orchestrator | Wednesday 04 February 2026 01:00:48 +0000 (0:00:03.421) 0:00:20.812 **** 2026-02-04 01:01:32.202594 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-02-04 01:01:32.202601 | orchestrator | 2026-02-04 01:01:32.202607 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-04 01:01:32.202612 | orchestrator | Wednesday 04 February 2026 01:00:52 +0000 (0:00:04.069) 0:00:24.882 **** 2026-02-04 01:01:32.202619 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:32.202625 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:32.202632 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:32.202646 | orchestrator | 2026-02-04 01:01:32.202653 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-02-04 01:01:32.202659 | orchestrator | Wednesday 04 February 2026 01:00:53 +0000 (0:00:00.292) 0:00:25.174 **** 2026-02-04 01:01:32.202673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:01:32.202693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:01:32.202700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:01:32.202706 | orchestrator | 2026-02-04 01:01:32.202712 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-02-04 01:01:32.202718 | orchestrator | Wednesday 04 February 2026 01:00:53 +0000 (0:00:00.833) 0:00:26.007 **** 2026-02-04 01:01:32.202724 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:32.202730 | orchestrator | 2026-02-04 01:01:32.202737 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-02-04 01:01:32.202742 | orchestrator | Wednesday 04 February 2026 01:00:54 +0000 (0:00:00.146) 0:00:26.153 **** 2026-02-04 01:01:32.202748 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:32.202754 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:32.202760 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:32.202766 | orchestrator | 2026-02-04 01:01:32.202773 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-04 01:01:32.202779 | orchestrator | Wednesday 04 February 2026 01:00:54 +0000 (0:00:00.469) 0:00:26.623 **** 2026-02-04 01:01:32.202791 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:01:32.202798 | orchestrator | 2026-02-04 01:01:32.202804 | orchestrator | TASK [service-cert-copy : placement | Copying 
over extra CA certificates] ****** 2026-02-04 01:01:32.202811 | orchestrator | Wednesday 04 February 2026 01:00:55 +0000 (0:00:00.511) 0:00:27.135 **** 2026-02-04 01:01:32.202822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:01:32.202837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:01:32.202844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:01:32.202850 | orchestrator | 2026-02-04 01:01:32.202857 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-04 01:01:32.202863 | orchestrator | Wednesday 04 February 2026 01:00:56 +0000 (0:00:01.337) 0:00:28.472 **** 2026-02-04 01:01:32.202870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 01:01:32.202882 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:32.202889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 01:01:32.202899 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:32.202909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 01:01:32.202916 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:32.202923 | orchestrator | 2026-02-04 01:01:32.202929 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-02-04 01:01:32.202936 | orchestrator | Wednesday 04 February 2026 01:00:57 +0000 (0:00:00.658) 0:00:29.131 **** 2026-02-04 01:01:32.202942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 01:01:32.202949 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:32.202956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 01:01:32.202966 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:32.202973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 01:01:32.202979 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:32.202985 | orchestrator | 2026-02-04 01:01:32.202991 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-04 01:01:32.203001 | orchestrator | Wednesday 04 February 
2026 01:00:57 +0000 (0:00:00.892) 0:00:30.024 **** 2026-02-04 01:01:32.203012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:01:32.203019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:01:32.203025 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:01:32.203036 | orchestrator | 2026-02-04 01:01:32.203043 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-02-04 01:01:32.203050 | orchestrator | Wednesday 04 February 2026 01:00:59 +0000 (0:00:01.392) 0:00:31.416 **** 2026-02-04 01:01:32.203056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:01:32.203067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:01:32.203078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:01:32.203085 | orchestrator | 2026-02-04 01:01:32.203092 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-02-04 01:01:32.203098 | orchestrator | Wednesday 04 February 2026 01:01:01 +0000 (0:00:02.327) 0:00:33.744 **** 2026-02-04 01:01:32.203105 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-04 01:01:32.203117 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-04 01:01:32.203124 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-04 01:01:32.203131 | orchestrator | 2026-02-04 01:01:32.203137 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-02-04 01:01:32.203144 | orchestrator | Wednesday 04 February 2026 01:01:03 +0000 (0:00:01.584) 0:00:35.329 **** 2026-02-04 01:01:32.203150 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:32.203156 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:32.203163 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:32.203170 | orchestrator | 2026-02-04 01:01:32.203177 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-02-04 01:01:32.203184 | orchestrator | Wednesday 04 February 2026 01:01:04 +0000 (0:00:01.349) 0:00:36.678 **** 2026-02-04 01:01:32.203191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 01:01:32.203250 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:01:32.203265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 01:01:32.203273 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:01:32.203285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 01:01:32.203293 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:01:32.203299 | orchestrator | 2026-02-04 01:01:32.203311 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-02-04 01:01:32.203318 | orchestrator | Wednesday 04 February 2026 01:01:05 +0000 (0:00:00.471) 0:00:37.149 **** 2026-02-04 01:01:32.203325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:01:32.203333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:01:32.203347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 01:01:32.203354 | orchestrator | 2026-02-04 01:01:32.203361 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-02-04 01:01:32.203367 | orchestrator | 
Wednesday 04 February 2026 01:01:06 +0000 (0:00:01.261) 0:00:38.411 **** 2026-02-04 01:01:32.203375 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:32.203381 | orchestrator | 2026-02-04 01:01:32.203388 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-02-04 01:01:32.203394 | orchestrator | Wednesday 04 February 2026 01:01:09 +0000 (0:00:02.720) 0:00:41.132 **** 2026-02-04 01:01:32.203401 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:32.203407 | orchestrator | 2026-02-04 01:01:32.203415 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-02-04 01:01:32.203422 | orchestrator | Wednesday 04 February 2026 01:01:11 +0000 (0:00:02.431) 0:00:43.563 **** 2026-02-04 01:01:32.203433 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:32.203439 | orchestrator | 2026-02-04 01:01:32.203446 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-04 01:01:32.203457 | orchestrator | Wednesday 04 February 2026 01:01:26 +0000 (0:00:14.647) 0:00:58.211 **** 2026-02-04 01:01:32.203463 | orchestrator | 2026-02-04 01:01:32.203469 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-04 01:01:32.203476 | orchestrator | Wednesday 04 February 2026 01:01:26 +0000 (0:00:00.057) 0:00:58.268 **** 2026-02-04 01:01:32.203482 | orchestrator | 2026-02-04 01:01:32.203489 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-04 01:01:32.203495 | orchestrator | Wednesday 04 February 2026 01:01:26 +0000 (0:00:00.057) 0:00:58.326 **** 2026-02-04 01:01:32.203502 | orchestrator | 2026-02-04 01:01:32.203509 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-02-04 01:01:32.203516 | orchestrator | Wednesday 04 February 2026 01:01:26 +0000 (0:00:00.060) 0:00:58.387 **** 
2026-02-04 01:01:32.203522 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:01:32.203542 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:01:32.203550 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:01:32.203556 | orchestrator | 2026-02-04 01:01:32.203563 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:01:32.203570 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 01:01:32.203578 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 01:01:32.203584 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 01:01:32.203591 | orchestrator | 2026-02-04 01:01:32.203597 | orchestrator | 2026-02-04 01:01:32.203604 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:01:32.203610 | orchestrator | Wednesday 04 February 2026 01:01:30 +0000 (0:00:04.416) 0:01:02.803 **** 2026-02-04 01:01:32.203618 | orchestrator | =============================================================================== 2026-02-04 01:01:32.203622 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.65s 2026-02-04 01:01:32.203626 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 5.97s 2026-02-04 01:01:32.203630 | orchestrator | placement : Restart placement-api container ----------------------------- 4.42s 2026-02-04 01:01:32.203634 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.14s 2026-02-04 01:01:32.203638 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.07s 2026-02-04 01:01:32.203642 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.42s 2026-02-04 
01:01:32.203646 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.29s
2026-02-04 01:01:32.203650 | orchestrator | service-ks-register : placement | Creating services --------------------- 2.89s
2026-02-04 01:01:32.203654 | orchestrator | placement : Creating placement databases -------------------------------- 2.72s
2026-02-04 01:01:32.203658 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.43s
2026-02-04 01:01:32.203662 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.33s
2026-02-04 01:01:32.203666 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.58s
2026-02-04 01:01:32.203670 | orchestrator | placement : Copying over config.json files for services ----------------- 1.39s
2026-02-04 01:01:32.203673 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.35s
2026-02-04 01:01:32.203678 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.34s
2026-02-04 01:01:32.203682 | orchestrator | placement : Check placement containers ---------------------------------- 1.26s
2026-02-04 01:01:32.203686 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.89s
2026-02-04 01:01:32.203690 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.83s
2026-02-04 01:01:32.203697 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.66s
2026-02-04 01:01:32.203702 | orchestrator | placement : include_tasks ----------------------------------------------- 0.51s
2026-02-04 01:01:32.203706 | orchestrator | 2026-02-04 01:01:32 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:01:32.204134 | orchestrator | 2026-02-04 01:01:32 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:01:32.205399 | orchestrator | 2026-02-04 01:01:32 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:01:32.205518 | orchestrator | 2026-02-04 01:01:32 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:01:35.242694 | orchestrator | 2026-02-04 01:01:35 | INFO  | Task e0579c5e-83da-443f-86d3-6371a081f619 is in state STARTED
2026-02-04 01:01:35.243225 | orchestrator | 2026-02-04 01:01:35 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:01:35.244808 | orchestrator | 2026-02-04 01:01:35 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:01:35.245613 | orchestrator | 2026-02-04 01:01:35 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:01:35.245687 | orchestrator | 2026-02-04 01:01:35 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:01:38.272840 | orchestrator | 2026-02-04 01:01:38 | INFO  | Task e0579c5e-83da-443f-86d3-6371a081f619 is in state SUCCESS
2026-02-04 01:01:38.273790 | orchestrator | 2026-02-04 01:01:38 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:01:38.274244 | orchestrator | 2026-02-04 01:01:38 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:01:38.275029 | orchestrator | 2026-02-04 01:01:38 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:01:38.276046 | orchestrator | 2026-02-04 01:01:38 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:01:38.276083 | orchestrator | 2026-02-04 01:01:38 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:01:41.306125 | orchestrator | 2026-02-04 01:01:41 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:01:41.308647 | orchestrator | 2026-02-04 01:01:41 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:01:41.312308 | orchestrator | 2026-02-04 01:01:41 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:01:41.314083 | orchestrator | 2026-02-04 01:01:41 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:01:41.314127 | orchestrator | 2026-02-04 01:01:41 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:01:44.352134 | orchestrator | 2026-02-04 01:01:44 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:01:44.352500 | orchestrator | 2026-02-04 01:01:44 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:01:44.353168 | orchestrator | 2026-02-04 01:01:44 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:01:44.353997 | orchestrator | 2026-02-04 01:01:44 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:01:44.354043 | orchestrator | 2026-02-04 01:01:44 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:01:47.480518 | orchestrator | 2026-02-04 01:01:47 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:01:47.480594 | orchestrator | 2026-02-04 01:01:47 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:01:47.480613 | orchestrator | 2026-02-04 01:01:47 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:01:47.480618 | orchestrator | 2026-02-04 01:01:47 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:01:47.480622 | orchestrator | 2026-02-04 01:01:47 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:01:50.440552 | orchestrator | 2026-02-04 01:01:50 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:01:50.441556 | orchestrator | 2026-02-04 01:01:50 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:01:50.443633 | orchestrator | 2026-02-04 01:01:50 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:01:50.445725 | orchestrator | 2026-02-04 01:01:50 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:01:50.445769 | orchestrator | 2026-02-04 01:01:50 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:01:53.479120 | orchestrator | 2026-02-04 01:01:53 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:01:53.479990 | orchestrator | 2026-02-04 01:01:53 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:01:53.480499 | orchestrator | 2026-02-04 01:01:53 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:01:53.481511 | orchestrator | 2026-02-04 01:01:53 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:01:53.481539 | orchestrator | 2026-02-04 01:01:53 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:01:56.521379 | orchestrator | 2026-02-04 01:01:56 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:01:56.523410 | orchestrator | 2026-02-04 01:01:56 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:01:56.524921 | orchestrator | 2026-02-04 01:01:56 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:01:56.526949 | orchestrator | 2026-02-04 01:01:56 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:01:56.526989 | orchestrator | 2026-02-04 01:01:56 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:01:59.563870 | orchestrator | 2026-02-04 01:01:59 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:01:59.565249 | orchestrator | 2026-02-04 01:01:59 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:01:59.565338 | orchestrator | 2026-02-04 01:01:59 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:01:59.566465 | orchestrator | 2026-02-04 01:01:59 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:01:59.567105 | orchestrator | 2026-02-04 01:01:59 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:02.598617 | orchestrator | 2026-02-04 01:02:02 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:02:02.599622 | orchestrator | 2026-02-04 01:02:02 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:02:02.600334 | orchestrator | 2026-02-04 01:02:02 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:02:02.601466 | orchestrator | 2026-02-04 01:02:02 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:02:02.601628 | orchestrator | 2026-02-04 01:02:02 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:05.644230 | orchestrator | 2026-02-04 01:02:05 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:02:05.644979 | orchestrator | 2026-02-04 01:02:05 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:02:05.646537 | orchestrator | 2026-02-04 01:02:05 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:02:05.647300 | orchestrator | 2026-02-04 01:02:05 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:02:05.647328 | orchestrator | 2026-02-04 01:02:05 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:08.685961 | orchestrator | 2026-02-04 01:02:08 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:02:08.686045 | orchestrator | 2026-02-04 01:02:08 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:02:08.686053 | orchestrator | 2026-02-04 01:02:08 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:02:08.686059 | orchestrator | 2026-02-04 01:02:08 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:02:08.686078 | orchestrator | 2026-02-04 01:02:08 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:11.724881 | orchestrator | 2026-02-04 01:02:11 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:02:11.724937 | orchestrator | 2026-02-04 01:02:11 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:02:11.724946 | orchestrator | 2026-02-04 01:02:11 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:02:11.724953 | orchestrator | 2026-02-04 01:02:11 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state STARTED
2026-02-04 01:02:11.724959 | orchestrator | 2026-02-04 01:02:11 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:14.751516 | orchestrator | 2026-02-04 01:02:14 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:02:14.752588 | orchestrator | 2026-02-04 01:02:14 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:02:14.752676 | orchestrator | 2026-02-04 01:02:14 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:02:14.754181 | orchestrator | 2026-02-04 01:02:14 | INFO  | Task 1292025e-3ddc-42cb-956c-81108ab20401 is in state SUCCESS
2026-02-04 01:02:14.755251 | orchestrator |
2026-02-04 01:02:14.755282 | orchestrator |
2026-02-04 01:02:14.755289 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 01:02:14.755295 | orchestrator |
2026-02-04 01:02:14.755301 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 01:02:14.755328 | orchestrator | Wednesday 04 February 2026 01:01:34 +0000 (0:00:00.157) 0:00:00.157 ****
2026-02-04 01:02:14.755334 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:02:14.755342 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:02:14.755352 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:02:14.755358 | orchestrator |
2026-02-04 01:02:14.755368 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 01:02:14.755374 | orchestrator | Wednesday 04 February 2026 01:01:34 +0000 (0:00:00.273) 0:00:00.430 ****
2026-02-04 01:02:14.755380 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-02-04 01:02:14.755386 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-02-04 01:02:14.755392 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-02-04 01:02:14.755397 | orchestrator |
2026-02-04 01:02:14.755403 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2026-02-04 01:02:14.755409 | orchestrator |
2026-02-04 01:02:14.755806 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2026-02-04 01:02:14.755819 | orchestrator | Wednesday 04 February 2026 01:01:35 +0000 (0:00:00.701) 0:00:01.132 ****
2026-02-04 01:02:14.755878 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:02:14.755891 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:02:14.755900 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:02:14.755909 | orchestrator |
2026-02-04 01:02:14.755919 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 01:02:14.755929 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:02:14.755939 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:02:14.755949 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:02:14.755958 | orchestrator |
2026-02-04 01:02:14.756103 | orchestrator |
2026-02-04 01:02:14.756110 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 01:02:14.756116 | orchestrator | Wednesday 04 February 2026 01:01:36 +0000 (0:00:00.881) 0:00:02.013 ****
2026-02-04 01:02:14.756122 | orchestrator | ===============================================================================
2026-02-04 01:02:14.756127 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.88s
2026-02-04 01:02:14.756133 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s
2026-02-04 01:02:14.756138 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s
2026-02-04 01:02:14.756144 | orchestrator |
2026-02-04 01:02:14.756149 | orchestrator |
2026-02-04 01:02:14.756155 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 01:02:14.756160 | orchestrator |
2026-02-04 01:02:14.756166 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 01:02:14.756171 | orchestrator | Wednesday 04 February 2026 00:58:20 +0000 (0:00:00.353) 0:00:00.353 ****
2026-02-04 01:02:14.756177 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:02:14.756183 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:02:14.756188 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:02:14.756194 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:02:14.756199 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:02:14.756204 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:02:14.756210 | orchestrator |
2026-02-04 01:02:14.756216 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 01:02:14.756221 | orchestrator | Wednesday 04 February 2026 00:58:21 +0000 (0:00:00.890) 0:00:01.243 ****
2026-02-04 01:02:14.756230 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-02-04 01:02:14.756240 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-02-04 01:02:14.756250 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-02-04 01:02:14.756259 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-02-04 01:02:14.756269 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-02-04 01:02:14.756277 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-02-04 01:02:14.756285 | orchestrator |
2026-02-04 01:02:14.756294 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-02-04 01:02:14.756302 | orchestrator |
2026-02-04 01:02:14.756311 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-02-04 01:02:14.756321 | orchestrator | Wednesday 04 February 2026 00:58:21 +0000 (0:00:00.590) 0:00:01.833 ****
2026-02-04 01:02:14.756331 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:02:14.756341 | orchestrator |
2026-02-04 01:02:14.756350 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-02-04 01:02:14.756361 | orchestrator | Wednesday 04 February 2026 00:58:22 +0000 (0:00:00.989) 0:00:02.822 ****
2026-02-04 01:02:14.756381 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:02:14.756391 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:02:14.756400 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:02:14.756411 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:02:14.756417 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:02:14.756422 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:02:14.756428 | orchestrator |
2026-02-04 01:02:14.756434 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-02-04 01:02:14.756448 | orchestrator | Wednesday 04 February 2026 00:58:24 +0000 (0:00:01.195) 0:00:04.017 ****
2026-02-04 01:02:14.756454 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:02:14.756459 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:02:14.756465 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:02:14.756470 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:02:14.756476 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:02:14.756523 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:02:14.756537 | orchestrator |
2026-02-04 01:02:14.756547 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-02-04 01:02:14.756556 | orchestrator | Wednesday 04 February 2026 00:58:25 +0000 (0:00:01.066) 0:00:05.084 ****
2026-02-04 01:02:14.756565 | orchestrator | ok: [testbed-node-0] => {
2026-02-04 01:02:14.756575 | orchestrator |  "changed": false,
2026-02-04 01:02:14.756584 | orchestrator |  "msg": "All assertions passed"
2026-02-04 01:02:14.756594 | orchestrator | }
2026-02-04 01:02:14.756604 | orchestrator | ok: [testbed-node-1] => {
2026-02-04 01:02:14.756628 | orchestrator |  "changed": false,
2026-02-04 01:02:14.756638 | orchestrator |  "msg": "All assertions passed"
2026-02-04 01:02:14.756647 | orchestrator | }
2026-02-04 01:02:14.756656 | orchestrator | ok: [testbed-node-2] => {
2026-02-04 01:02:14.756666 | orchestrator |  "changed": false,
2026-02-04 01:02:14.756675 | orchestrator |  "msg": "All assertions passed"
2026-02-04 01:02:14.756685 | orchestrator | }
2026-02-04 01:02:14.756693 | orchestrator | ok: [testbed-node-3] => {
2026-02-04 01:02:14.756701 | orchestrator |  "changed": false,
2026-02-04 01:02:14.756710 | orchestrator |  "msg": "All assertions passed"
2026-02-04 01:02:14.756719 | orchestrator | }
2026-02-04 01:02:14.756728 | orchestrator | ok: [testbed-node-4] => {
2026-02-04 01:02:14.756738 | orchestrator |  "changed": false,
2026-02-04 01:02:14.756748 | orchestrator |  "msg": "All assertions passed"
2026-02-04 01:02:14.756758 | orchestrator | }
2026-02-04 01:02:14.756769 | orchestrator | ok: [testbed-node-5] => {
2026-02-04 01:02:14.756779 | orchestrator |  "changed": false,
2026-02-04 01:02:14.756788 | orchestrator |  "msg": "All assertions passed"
2026-02-04 01:02:14.756798 | orchestrator | }
2026-02-04 01:02:14.756808 | orchestrator |
2026-02-04 01:02:14.756817 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-02-04 01:02:14.756827 | orchestrator | Wednesday 04 February 2026 00:58:25 +0000 (0:00:00.629) 0:00:05.714 ****
2026-02-04 01:02:14.756836 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:02:14.756845 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:02:14.756855 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:02:14.756865 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:02:14.756874 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:02:14.756883 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:02:14.756893 | orchestrator |
2026-02-04 01:02:14.756903 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-02-04 01:02:14.756914 | orchestrator | Wednesday 04 February 2026 00:58:26 +0000 (0:00:00.496) 0:00:06.210 ****
2026-02-04 01:02:14.756924 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-02-04 01:02:14.756933 | orchestrator |
2026-02-04 01:02:14.756944 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-02-04 01:02:14.756955 | orchestrator | Wednesday 04 February 2026 00:58:29 +0000 (0:00:03.511) 0:00:09.722 ****
2026-02-04 01:02:14.756964 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-02-04 01:02:14.756981 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-02-04 01:02:14.756988 | orchestrator |
2026-02-04 01:02:14.756995 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-02-04 01:02:14.757002 | orchestrator | Wednesday 04 February 2026 00:58:37 +0000 (0:00:07.662) 0:00:17.385 ****
2026-02-04 01:02:14.757009 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-04 01:02:14.757016 | orchestrator |
2026-02-04 01:02:14.757022 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-02-04 01:02:14.757029 | orchestrator | Wednesday 04 February 2026 00:58:41 +0000 (0:00:03.537) 0:00:20.922 ****
2026-02-04 01:02:14.757036 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-04 01:02:14.757043 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-02-04 01:02:14.757050 | orchestrator |
2026-02-04 01:02:14.757056 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-02-04 01:02:14.757063 | orchestrator | Wednesday 04 February 2026 00:58:45 +0000 (0:00:04.328) 0:00:25.251 ****
2026-02-04 01:02:14.757070 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-04 01:02:14.757076 | orchestrator |
2026-02-04 01:02:14.757083 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-02-04 01:02:14.757090 | orchestrator | Wednesday 04 February 2026 00:58:48 +0000 (0:00:03.573) 0:00:28.824 ****
2026-02-04 01:02:14.757097 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-02-04 01:02:14.757102 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-02-04 01:02:14.757108 | orchestrator |
2026-02-04 01:02:14.757114 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-02-04 01:02:14.757119 | orchestrator | Wednesday 04 February 2026 00:58:56 +0000 (0:00:07.809) 0:00:36.634 ****
2026-02-04 01:02:14.757125 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:02:14.757130 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:02:14.757136 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:02:14.757154 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:02:14.757165 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:02:14.757171 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:02:14.757177 | orchestrator |
2026-02-04 01:02:14.757182 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-02-04 01:02:14.757188 | orchestrator | Wednesday 04 February 2026 00:58:57 +0000 (0:00:00.740) 0:00:37.375 ****
2026-02-04 01:02:14.757193 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:02:14.757199 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:02:14.757205 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:02:14.757210 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:02:14.757215 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:02:14.757221 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:02:14.757226 | orchestrator |
2026-02-04 01:02:14.757232 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-02-04 01:02:14.757243 | orchestrator | Wednesday 04 February 2026 00:59:00 +0000 (0:00:02.865) 0:00:40.240 ****
2026-02-04 01:02:14.757249 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:02:14.757260 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:02:14.757268 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:02:14.757277 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:02:14.757286 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:02:14.757338 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:02:14.757350 | orchestrator |
2026-02-04 01:02:14.757357 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-04 01:02:14.757362 | orchestrator | Wednesday 04 February 2026 00:59:01 +0000 (0:00:01.481) 0:00:41.721 ****
2026-02-04 01:02:14.757368 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:02:14.757374 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:02:14.757379 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:02:14.757390 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:02:14.757395 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:02:14.757401 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:02:14.757406 | orchestrator |
2026-02-04 01:02:14.757412 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-02-04 01:02:14.757417 | orchestrator | Wednesday 04 February 2026 00:59:04 +0000 (0:00:03.192) 0:00:44.914 ****
2026-02-04 01:02:14.757425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:02:14.757433 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:02:14.757440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:02:14.757449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:02:14.757476 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:02:14.757484 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:02:14.757489 | orchestrator |
2026-02-04 01:02:14.757495 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2026-02-04 01:02:14.757501 | orchestrator | Wednesday 04 February 2026 00:59:08 +0000 (0:00:03.366) 0:00:48.281 ****
2026-02-04 01:02:14.757507 | orchestrator | [WARNING]: Skipped
2026-02-04 01:02:14.757512 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2026-02-04 01:02:14.757518 | orchestrator | due to this access issue:
2026-02-04 01:02:14.757524 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2026-02-04 01:02:14.757530 | orchestrator | a directory
2026-02-04 01:02:14.757536 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-04 01:02:14.757543 | orchestrator |
2026-02-04 01:02:14.757553 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-02-04 01:02:14.757559 | orchestrator | Wednesday 04 February 2026 00:59:09 +0000 (0:00:00.680) 0:00:48.961 ****
2026-02-04 01:02:14.757565 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:02:14.757571 | orchestrator |
2026-02-04 01:02:14.757577 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2026-02-04 01:02:14.757582 | orchestrator | Wednesday 04 February 2026 00:59:10 +0000 (0:00:01.204) 0:00:50.166 ****
2026-02-04 01:02:14.757588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:02:14.757632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:02:14.757645 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:02:14.757652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:02:14.757658 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:02:14.757664 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:02:14.757673 | orchestrator |
2026-02-04 01:02:14.757679 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2026-02-04 01:02:14.757685 | orchestrator | Wednesday 04 February 2026 00:59:13 +0000 (0:00:03.622) 0:00:53.789 ****
2026-02-04 01:02:14.757713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:02:14.757720 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:02:14.757726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:02:14.757732 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:02:14.757738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:02:14.757744 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:02:14.757750 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:02:14.757756 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:02:14.757767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:02:14.757773 | orchestrator | skipping: [testbed-node-3] 
2026-02-04 01:02:14.757804 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:02:14.757820 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:02:14.757831 | orchestrator | 2026-02-04 01:02:14.757840 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-04 01:02:14.757848 | orchestrator | Wednesday 04 February 2026 00:59:16 +0000 (0:00:02.836) 0:00:56.625 **** 2026-02-04 01:02:14.757858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:02:14.757867 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:02:14.757877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:02:14.757887 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:02:14.757896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 
01:02:14.757913 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:02:14.757933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:02:14.757942 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:02:14.757948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:02:14.757954 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:02:14.757960 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:02:14.757966 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:02:14.757971 | orchestrator | 2026-02-04 01:02:14.757977 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-04 01:02:14.757983 | orchestrator | Wednesday 04 February 2026 00:59:19 +0000 (0:00:02.814) 0:00:59.440 **** 2026-02-04 01:02:14.757989 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:02:14.757994 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:02:14.758000 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:02:14.758006 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:02:14.758011 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:02:14.758050 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:02:14.758060 | orchestrator | 2026-02-04 01:02:14.758067 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-04 01:02:14.758073 | orchestrator | Wednesday 04 February 2026 00:59:22 +0000 (0:00:02.507) 0:01:01.948 **** 2026-02-04 01:02:14.758078 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:02:14.758084 | orchestrator | 2026-02-04 01:02:14.758090 | orchestrator | TASK [neutron : Set neutron 
policy file] *************************************** 2026-02-04 01:02:14.758095 | orchestrator | Wednesday 04 February 2026 00:59:22 +0000 (0:00:00.205) 0:01:02.153 **** 2026-02-04 01:02:14.758101 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:02:14.758107 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:02:14.758112 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:02:14.758118 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:02:14.758124 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:02:14.758129 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:02:14.758135 | orchestrator | 2026-02-04 01:02:14.758141 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-04 01:02:14.758147 | orchestrator | Wednesday 04 February 2026 00:59:22 +0000 (0:00:00.592) 0:01:02.746 **** 2026-02-04 01:02:14.758155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:02:14.758162 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:02:14.758174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:02:14.758184 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:02:14.758193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:02:14.758209 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:02:14.758219 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:02:14.758228 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:02:14.758234 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:02:14.758241 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:02:14.758260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:02:14.758277 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:02:14.758286 | orchestrator | 2026-02-04 01:02:14.758295 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-04 01:02:14.758304 | orchestrator | Wednesday 04 February 2026 00:59:25 +0000 (0:00:02.772) 0:01:05.519 **** 2026-02-04 01:02:14.758314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 01:02:14.758324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 01:02:14.758342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 01:02:14.758353 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 01:02:14.758373 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 01:02:14.758384 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 01:02:14.758399 | orchestrator | 2026-02-04 01:02:14.758409 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-04 01:02:14.758419 | orchestrator | Wednesday 04 February 2026 00:59:29 +0000 (0:00:04.005) 0:01:09.524 **** 2026-02-04 01:02:14.758429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 01:02:14.758438 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 01:02:14.758451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 01:02:14.758467 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 01:02:14.758477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 01:02:14.758493 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 01:02:14.758501 | orchestrator | 2026-02-04 01:02:14.758511 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-02-04 01:02:14.758521 | orchestrator | Wednesday 04 February 2026 00:59:35 +0000 (0:00:05.748) 0:01:15.273 **** 2026-02-04 01:02:14.758531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:02:14.758541 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:02:14.758560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:02:14.758569 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:02:14.758575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 
'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:02:14.758585 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:02:14.758591 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:02:14.758597 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:02:14.758603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:02:14.758652 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:02:14.758660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:02:14.758666 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:02:14.758671 | orchestrator | 2026-02-04 01:02:14.758677 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-02-04 01:02:14.758683 | orchestrator | Wednesday 04 February 2026 00:59:37 +0000 (0:00:02.124) 0:01:17.398 **** 2026-02-04 01:02:14.758688 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:02:14.758694 | orchestrator | skipping: 
[testbed-node-3] 2026-02-04 01:02:14.758700 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:02:14.758705 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:02:14.758714 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:02:14.758720 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:02:14.758725 | orchestrator | 2026-02-04 01:02:14.758731 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-02-04 01:02:14.758740 | orchestrator | Wednesday 04 February 2026 00:59:39 +0000 (0:00:02.430) 0:01:19.829 **** 2026-02-04 01:02:14.758746 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:02:14.758759 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:02:14.758765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:02:14.758771 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:02:14.758777 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:02:14.758783 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:02:14.758789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 01:02:14.758800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 01:02:14.758810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}}) 2026-02-04 01:02:14.758816 | orchestrator | 2026-02-04 01:02:14.758822 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-02-04 01:02:14.758828 | orchestrator | Wednesday 04 February 2026 00:59:43 +0000 (0:00:03.639) 0:01:23.469 **** 2026-02-04 01:02:14.758834 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:02:14.758839 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:02:14.758845 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:02:14.758850 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:02:14.758856 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:02:14.758861 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:02:14.758867 | orchestrator | 2026-02-04 01:02:14.758875 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-02-04 01:02:14.758885 | orchestrator | Wednesday 04 February 2026 00:59:45 +0000 (0:00:01.914) 0:01:25.384 **** 2026-02-04 01:02:14.758894 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:02:14.758903 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:02:14.758912 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:02:14.758921 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:02:14.758931 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:02:14.758940 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:02:14.758950 | orchestrator | 2026-02-04 01:02:14.758960 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-02-04 01:02:14.758967 | orchestrator | Wednesday 04 February 2026 00:59:47 +0000 (0:00:01.782) 0:01:27.166 **** 2026-02-04 01:02:14.758973 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:02:14.758981 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:02:14.758990 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:02:14.759000 | orchestrator | 
skipping: [testbed-node-1] 2026-02-04 01:02:14.759008 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:02:14.759017 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:02:14.759026 | orchestrator | 2026-02-04 01:02:14.759035 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-02-04 01:02:14.759044 | orchestrator | Wednesday 04 February 2026 00:59:49 +0000 (0:00:01.780) 0:01:28.947 **** 2026-02-04 01:02:14.759053 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:02:14.759062 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:02:14.759071 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:02:14.759080 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:02:14.759085 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:02:14.759091 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:02:14.759096 | orchestrator | 2026-02-04 01:02:14.759101 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-02-04 01:02:14.759106 | orchestrator | Wednesday 04 February 2026 00:59:50 +0000 (0:00:01.706) 0:01:30.653 **** 2026-02-04 01:02:14.759112 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:02:14.759121 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:02:14.759127 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:02:14.759132 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:02:14.759137 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:02:14.759142 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:02:14.759147 | orchestrator | 2026-02-04 01:02:14.759152 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-02-04 01:02:14.759158 | orchestrator | Wednesday 04 February 2026 00:59:52 +0000 (0:00:01.854) 0:01:32.508 **** 2026-02-04 01:02:14.759163 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:02:14.759168 | orchestrator | 
skipping: [testbed-node-2] 2026-02-04 01:02:14.759173 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:02:14.759178 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:02:14.759183 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:02:14.759188 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:02:14.759193 | orchestrator | 2026-02-04 01:02:14.759199 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-02-04 01:02:14.759204 | orchestrator | Wednesday 04 February 2026 00:59:54 +0000 (0:00:02.039) 0:01:34.548 **** 2026-02-04 01:02:14.759209 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-04 01:02:14.759214 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:02:14.759223 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-04 01:02:14.759232 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:02:14.759250 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-04 01:02:14.759259 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:02:14.759267 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-04 01:02:14.759281 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:02:14.759291 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-04 01:02:14.759304 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:02:14.759314 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-04 01:02:14.759323 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:02:14.759330 | orchestrator | 2026-02-04 01:02:14.759339 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-02-04 
01:02:14.759348 | orchestrator | Wednesday 04 February 2026 00:59:56 +0000 (0:00:02.285) 0:01:36.833 **** 2026-02-04 01:02:14.759357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:02:14.759367 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:02:14.759376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:02:14.759387 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:02:14.759393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:02:14.759398 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:02:14.759406 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:02:14.759412 | orchestrator | 
skipping: [testbed-node-5] 2026-02-04 01:02:14.759424 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:02:14.759433 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:02:14.759448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:02:14.759461 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:02:14.759470 | orchestrator | 2026-02-04 01:02:14.759478 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-02-04 01:02:14.759487 | orchestrator | Wednesday 04 February 2026 
00:59:58 +0000 (0:00:01.668) 0:01:38.501 **** 2026-02-04 01:02:14.759496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:02:14.759505 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:02:14.759511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2026-02-04 01:02:14.759517 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:02:14.759531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 01:02:14.759537 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:02:14.759543 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:02:14.759552 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:02:14.759557 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:02:14.759563 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:02:14.759568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 01:02:14.759574 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:02:14.759579 | orchestrator | 2026-02-04 01:02:14.759584 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-02-04 01:02:14.759590 | orchestrator | Wednesday 04 February 2026 01:00:00 +0000 (0:00:01.758) 0:01:40.260 **** 2026-02-04 
01:02:14.759595 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:02:14.759600 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:02:14.759605 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:02:14.759656 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:02:14.759667 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:02:14.759676 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:02:14.759684 | orchestrator | 2026-02-04 01:02:14.759692 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-02-04 01:02:14.759701 | orchestrator | Wednesday 04 February 2026 01:00:02 +0000 (0:00:01.788) 0:01:42.048 **** 2026-02-04 01:02:14.759710 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:02:14.759719 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:02:14.759728 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:02:14.759737 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:02:14.759747 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:02:14.759752 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:02:14.759757 | orchestrator | 2026-02-04 01:02:14.759763 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-02-04 01:02:14.759768 | orchestrator | Wednesday 04 February 2026 01:00:05 +0000 (0:00:03.265) 0:01:45.314 **** 2026-02-04 01:02:14.759773 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:02:14.759781 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:02:14.759786 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:02:14.759791 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:02:14.759796 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:02:14.759801 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:02:14.759806 | orchestrator | 2026-02-04 01:02:14.759816 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] 
*************************
2026-02-04 01:02:14.759821 | orchestrator | Wednesday 04 February 2026 01:00:07 +0000 (0:00:02.085) 0:01:47.399 ****
2026-02-04 01:02:14.759826 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:02:14.759835 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:02:14.759840 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:02:14.759845 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:02:14.759849 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:02:14.759854 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:02:14.759859 | orchestrator |
2026-02-04 01:02:14.759864 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-02-04 01:02:14.759869 | orchestrator | Wednesday 04 February 2026 01:00:09 +0000 (0:00:02.187) 0:01:49.587 ****
2026-02-04 01:02:14.759874 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:02:14.759879 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:02:14.759884 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:02:14.759889 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:02:14.759894 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:02:14.759899 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:02:14.759904 | orchestrator |
2026-02-04 01:02:14.759909 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-02-04 01:02:14.759913 | orchestrator | Wednesday 04 February 2026 01:00:11 +0000 (0:00:02.186) 0:01:51.774 ****
2026-02-04 01:02:14.759919 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:02:14.759923 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:02:14.759928 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:02:14.759933 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:02:14.759938 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:02:14.759943 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:02:14.759949 | orchestrator |
2026-02-04 01:02:14.759957 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-02-04 01:02:14.759966 | orchestrator | Wednesday 04 February 2026 01:00:13 +0000 (0:00:01.707) 0:01:53.481 ****
2026-02-04 01:02:14.759974 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:02:14.759982 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:02:14.759990 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:02:14.759998 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:02:14.760007 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:02:14.760015 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:02:14.760023 | orchestrator |
2026-02-04 01:02:14.760032 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-02-04 01:02:14.760037 | orchestrator | Wednesday 04 February 2026 01:00:15 +0000 (0:00:01.581) 0:01:55.063 ****
2026-02-04 01:02:14.760042 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:02:14.760047 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:02:14.760052 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:02:14.760057 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:02:14.760062 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:02:14.760067 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:02:14.760072 | orchestrator |
2026-02-04 01:02:14.760078 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-02-04 01:02:14.760087 | orchestrator | Wednesday 04 February 2026 01:00:17 +0000 (0:00:02.010) 0:01:57.073 ****
2026-02-04 01:02:14.760095 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:02:14.760103 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:02:14.760111 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:02:14.760119 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:02:14.760127 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:02:14.760135 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:02:14.760143 | orchestrator |
2026-02-04 01:02:14.760151 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-02-04 01:02:14.760158 | orchestrator | Wednesday 04 February 2026 01:00:18 +0000 (0:00:01.841) 0:01:58.915 ****
2026-02-04 01:02:14.760166 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-04 01:02:14.760174 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:02:14.760182 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-04 01:02:14.760197 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:02:14.760205 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-04 01:02:14.760213 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:02:14.760221 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-04 01:02:14.760230 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:02:14.760239 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-04 01:02:14.760247 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:02:14.760256 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-04 01:02:14.760265 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:02:14.760273 | orchestrator |
2026-02-04 01:02:14.760282 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-02-04 01:02:14.760290 | orchestrator | Wednesday 04 February 2026 01:00:20 +0000 (0:00:01.867) 0:02:00.783 **** 2026-02-04
01:02:14.760308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:02:14.760319 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:02:14.760328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:02:14.760336 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:02:14.760345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:02:14.760360 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:02:14.760370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:02:14.760379 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:02:14.760387 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:02:14.760396 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:02:14.760417 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:02:14.760423 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:02:14.760428 | orchestrator |
2026-02-04 01:02:14.760441 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-02-04 01:02:14.760456 | orchestrator | Wednesday 04 February 2026 01:00:23 +0000 (0:00:02.168) 0:02:02.952 ****
2026-02-04 01:02:14.760465 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:02:14.760473 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:02:14.760489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:02:14.760502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:02:14.760516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 01:02:14.760525 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 01:02:14.760534 | orchestrator |
2026-02-04 01:02:14.760543 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-02-04 01:02:14.760556 | orchestrator | Wednesday 04 February 2026 01:00:26 +0000 (0:00:02.984) 0:02:05.936 ****
2026-02-04 01:02:14.760565 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:02:14.760570 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:02:14.760575 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:02:14.760580 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:02:14.760585 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:02:14.760590 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:02:14.760595 | orchestrator |
2026-02-04 01:02:14.760600 | orchestrator | TASK [neutron : Creating Neutron database]
*************************************
2026-02-04 01:02:14.760605 | orchestrator | Wednesday 04 February 2026 01:00:26 +0000 (0:00:00.495) 0:02:06.432 ****
2026-02-04 01:02:14.760624 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:02:14.760632 | orchestrator |
2026-02-04 01:02:14.760637 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-02-04 01:02:14.760643 | orchestrator | Wednesday 04 February 2026 01:00:28 +0000 (0:00:01.855) 0:02:08.287 ****
2026-02-04 01:02:14.760649 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:02:14.760658 | orchestrator |
2026-02-04 01:02:14.760663 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-02-04 01:02:14.760668 | orchestrator | Wednesday 04 February 2026 01:00:30 +0000 (0:00:01.988) 0:02:10.276 ****
2026-02-04 01:02:14.760674 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:02:14.760682 | orchestrator |
2026-02-04 01:02:14.760695 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-04 01:02:14.760704 | orchestrator | Wednesday 04 February 2026 01:01:15 +0000 (0:00:45.238) 0:02:55.514 ****
2026-02-04 01:02:14.760712 | orchestrator |
2026-02-04 01:02:14.760720 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-04 01:02:14.760728 | orchestrator | Wednesday 04 February 2026 01:01:15 +0000 (0:00:00.063) 0:02:55.578 ****
2026-02-04 01:02:14.760735 | orchestrator |
2026-02-04 01:02:14.760742 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-04 01:02:14.760750 | orchestrator | Wednesday 04 February 2026 01:01:15 +0000 (0:00:00.221) 0:02:55.799 ****
2026-02-04 01:02:14.760758 | orchestrator |
2026-02-04 01:02:14.760766 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-04 01:02:14.760774 | orchestrator | Wednesday 04 February 2026 01:01:15 +0000 (0:00:00.064) 0:02:55.864 ****
2026-02-04 01:02:14.760783 | orchestrator |
2026-02-04 01:02:14.760791 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-04 01:02:14.760799 | orchestrator | Wednesday 04 February 2026 01:01:16 +0000 (0:00:00.068) 0:02:55.932 ****
2026-02-04 01:02:14.760807 | orchestrator |
2026-02-04 01:02:14.760816 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-04 01:02:14.760824 | orchestrator | Wednesday 04 February 2026 01:01:16 +0000 (0:00:00.063) 0:02:55.996 ****
2026-02-04 01:02:14.760833 | orchestrator |
2026-02-04 01:02:14.760841 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-02-04 01:02:14.760848 | orchestrator | Wednesday 04 February 2026 01:01:16 +0000 (0:00:00.066) 0:02:56.062 ****
2026-02-04 01:02:14.760855 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:02:14.760863 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:02:14.760870 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:02:14.760877 | orchestrator |
2026-02-04 01:02:14.760889 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-02-04 01:02:14.760897 | orchestrator | Wednesday 04 February 2026 01:01:33 +0000 (0:00:17.544) 0:03:13.607 ****
2026-02-04 01:02:14.760904 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:02:14.760922 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:02:14.760931 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:02:14.760939 | orchestrator |
2026-02-04 01:02:14.760948 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 01:02:14.760964 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-04 01:02:14.760976 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-04 01:02:14.760986 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-04 01:02:14.760994 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-04 01:02:14.761003 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-04 01:02:14.761009 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-04 01:02:14.761014 | orchestrator |
2026-02-04 01:02:14.761019 | orchestrator |
2026-02-04 01:02:14.761024 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 01:02:14.761028 | orchestrator | Wednesday 04 February 2026 01:02:13 +0000 (0:00:39.845) 0:03:53.453 ****
2026-02-04 01:02:14.761034 | orchestrator | ===============================================================================
2026-02-04 01:02:14.761040 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 45.24s
2026-02-04 01:02:14.761048 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 39.85s
2026-02-04 01:02:14.761057 | orchestrator | neutron : Restart neutron-server container ----------------------------- 17.54s
2026-02-04 01:02:14.761064 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.81s
2026-02-04 01:02:14.761073 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.66s
2026-02-04 01:02:14.761080 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.75s
2026-02-04 01:02:14.761088 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.33s
2026-02-04 01:02:14.761096 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.01s
2026-02-04 01:02:14.761103 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.64s
2026-02-04 01:02:14.761112 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.62s
2026-02-04 01:02:14.761121 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.57s
2026-02-04 01:02:14.761129 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.54s
2026-02-04 01:02:14.761137 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.51s
2026-02-04 01:02:14.761146 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.37s
2026-02-04 01:02:14.761155 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.27s
2026-02-04 01:02:14.761163 | orchestrator | Setting sysctl values --------------------------------------------------- 3.19s
2026-02-04 01:02:14.761172 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.98s
2026-02-04 01:02:14.761180 | orchestrator | Load and persist kernel modules ----------------------------------------- 2.87s
2026-02-04 01:02:14.761189 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 2.84s
2026-02-04 01:02:14.761197 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.81s
2026-02-04 01:02:14.761204 | orchestrator | 2026-02-04 01:02:14 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:17.816953 | orchestrator | 2026-02-04 01:02:17 | INFO  | Task e9788c27-e054-4685-a4cd-0ea9f2ed1c5b is in state STARTED
2026-02-04 01:02:17.817008 | orchestrator | 2026-02-04 01:02:17 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:02:17.817031 | orchestrator |
2026-02-04 01:02:17 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:02:17.817037 | orchestrator | 2026-02-04 01:02:17 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:02:17.817043 | orchestrator | 2026-02-04 01:02:17 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:20.819842 | orchestrator | 2026-02-04 01:02:20 | INFO  | Task e9788c27-e054-4685-a4cd-0ea9f2ed1c5b is in state STARTED
2026-02-04 01:02:20.823374 | orchestrator | 2026-02-04 01:02:20 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:02:20.828137 | orchestrator | 2026-02-04 01:02:20 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:02:20.828186 | orchestrator | 2026-02-04 01:02:20 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:02:20.828192 | orchestrator | 2026-02-04 01:02:20 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:23.863733 | orchestrator | 2026-02-04 01:02:23 | INFO  | Task e9788c27-e054-4685-a4cd-0ea9f2ed1c5b is in state STARTED
2026-02-04 01:02:23.864706 | orchestrator | 2026-02-04 01:02:23 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:02:23.866226 | orchestrator | 2026-02-04 01:02:23 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:02:23.868947 | orchestrator | 2026-02-04 01:02:23 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:02:23.868998 | orchestrator | 2026-02-04 01:02:23 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:26.899883 | orchestrator | 2026-02-04 01:02:26 | INFO  | Task e9788c27-e054-4685-a4cd-0ea9f2ed1c5b is in state STARTED
2026-02-04 01:02:26.900308 | orchestrator | 2026-02-04 01:02:26 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:02:26.901035 | orchestrator | 2026-02-04 01:02:26 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:02:26.901737 | orchestrator | 2026-02-04 01:02:26 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:02:26.901841 | orchestrator | 2026-02-04 01:02:26 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:29.932503 | orchestrator | 2026-02-04 01:02:29 | INFO  | Task e9788c27-e054-4685-a4cd-0ea9f2ed1c5b is in state STARTED
2026-02-04 01:02:29.932707 | orchestrator | 2026-02-04 01:02:29 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:02:29.932732 | orchestrator | 2026-02-04 01:02:29 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:02:29.934495 | orchestrator | 2026-02-04 01:02:29 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:02:29.934553 | orchestrator | 2026-02-04 01:02:29 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:32.984984 | orchestrator | 2026-02-04 01:02:32 | INFO  | Task e9788c27-e054-4685-a4cd-0ea9f2ed1c5b is in state STARTED
2026-02-04 01:02:32.987222 | orchestrator | 2026-02-04 01:02:32 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:02:32.989349 | orchestrator | 2026-02-04 01:02:32 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:02:32.991112 | orchestrator | 2026-02-04 01:02:32 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:02:32.991157 | orchestrator | 2026-02-04 01:02:32 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:36.042764 | orchestrator | 2026-02-04 01:02:36 | INFO  | Task e9788c27-e054-4685-a4cd-0ea9f2ed1c5b is in state STARTED
2026-02-04 01:02:36.042881 | orchestrator | 2026-02-04 01:02:36 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:02:36.042892 | orchestrator | 2026-02-04 01:02:36 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:02:36.042899 | orchestrator | 2026-02-04 01:02:36 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:02:36.042906 | orchestrator | 2026-02-04 01:02:36 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:39.074749 | orchestrator | 2026-02-04 01:02:39 | INFO  | Task e9788c27-e054-4685-a4cd-0ea9f2ed1c5b is in state STARTED
2026-02-04 01:02:39.080774 | orchestrator | 2026-02-04 01:02:39 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:02:39.081881 | orchestrator | 2026-02-04 01:02:39 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:02:39.083073 | orchestrator | 2026-02-04 01:02:39 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:02:39.083124 | orchestrator | 2026-02-04 01:02:39 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:42.118235 | orchestrator | 2026-02-04 01:02:42 | INFO  | Task e9788c27-e054-4685-a4cd-0ea9f2ed1c5b is in state STARTED
2026-02-04 01:02:42.119610 | orchestrator | 2026-02-04 01:02:42 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:02:42.121070 | orchestrator | 2026-02-04 01:02:42 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:02:42.122778 | orchestrator | 2026-02-04 01:02:42 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:02:42.122816 | orchestrator | 2026-02-04 01:02:42 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:45.156044 | orchestrator | 2026-02-04 01:02:45 | INFO  | Task e9788c27-e054-4685-a4cd-0ea9f2ed1c5b is in state STARTED
2026-02-04 01:02:45.156337 | orchestrator | 2026-02-04 01:02:45 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:02:45.159051 | orchestrator | 2026-02-04 01:02:45 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:02:45.159116 | orchestrator | 2026-02-04 01:02:45 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:02:45.159215 | orchestrator | 2026-02-04 01:02:45 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:48.183473 | orchestrator | 2026-02-04 01:02:48 | INFO  | Task e9788c27-e054-4685-a4cd-0ea9f2ed1c5b is in state SUCCESS
2026-02-04 01:02:48.183856 | orchestrator | 2026-02-04 01:02:48 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:02:48.184359 | orchestrator | 2026-02-04 01:02:48 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:02:48.184895 | orchestrator | 2026-02-04 01:02:48 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:02:48.185762 | orchestrator | 2026-02-04 01:02:48 | INFO  | Task 03a02d33-3e5b-4aae-86de-2ea2b7c4afad is in state STARTED
2026-02-04 01:02:48.185824 | orchestrator | 2026-02-04 01:02:48 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:51.222535 | orchestrator | 2026-02-04 01:02:51 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:02:51.223243 | orchestrator | 2026-02-04 01:02:51 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:02:51.224261 | orchestrator | 2026-02-04 01:02:51 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:02:51.224614 | orchestrator | 2026-02-04 01:02:51 | INFO  | Task 03a02d33-3e5b-4aae-86de-2ea2b7c4afad is in state STARTED
2026-02-04 01:02:51.224632 | orchestrator | 2026-02-04 01:02:51 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:54.265123 | orchestrator | 2026-02-04 01:02:54 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:02:54.266180 | orchestrator | 2026-02-04 01:02:54 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:02:54.267199 | orchestrator | 2026-02-04 01:02:54 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:02:54.267962 | orchestrator | 2026-02-04 01:02:54 | INFO  | Task 03a02d33-3e5b-4aae-86de-2ea2b7c4afad is in state STARTED
2026-02-04 01:02:54.268745 | orchestrator | 2026-02-04 01:02:54 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:02:57.305621 | orchestrator | 2026-02-04 01:02:57 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:02:57.306780 | orchestrator | 2026-02-04 01:02:57 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:02:57.309163 | orchestrator | 2026-02-04 01:02:57 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:02:57.310231 | orchestrator | 2026-02-04 01:02:57 | INFO  | Task 03a02d33-3e5b-4aae-86de-2ea2b7c4afad is in state STARTED
2026-02-04 01:02:57.310280 | orchestrator | 2026-02-04 01:02:57 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:03:00.331228 | orchestrator | 2026-02-04 01:03:00 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:03:00.331963 | orchestrator | 2026-02-04 01:03:00 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:03:00.333047 | orchestrator | 2026-02-04 01:03:00 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:03:00.334236 | orchestrator | 2026-02-04 01:03:00 | INFO  | Task 03a02d33-3e5b-4aae-86de-2ea2b7c4afad is in state STARTED
2026-02-04 01:03:00.334272 | orchestrator | 2026-02-04 01:03:00 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:03:03.363734 | orchestrator | 2026-02-04 01:03:03 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:03:03.364619 | orchestrator | 2026-02-04 01:03:03 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:03:03.365838 | orchestrator | 2026-02-04 01:03:03 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:03:03.366572 | orchestrator | 2026-02-04 01:03:03 | INFO  | Task 03a02d33-3e5b-4aae-86de-2ea2b7c4afad is in state STARTED
2026-02-04 01:03:03.366652 | orchestrator | 2026-02-04 01:03:03 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:03:06.395431 | orchestrator | 2026-02-04 01:03:06 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:03:06.396793 | orchestrator | 2026-02-04 01:03:06 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:03:06.397979 | orchestrator | 2026-02-04 01:03:06 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state STARTED
2026-02-04 01:03:06.399324 | orchestrator | 2026-02-04 01:03:06 | INFO  | Task 03a02d33-3e5b-4aae-86de-2ea2b7c4afad is in state STARTED
2026-02-04 01:03:06.399359 | orchestrator | 2026-02-04 01:03:06 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:03:09.441078 | orchestrator | 2026-02-04 01:03:09 | INFO  | Task ca7dbfd6-1e95-4a20-81df-751168ccfc72 is in state STARTED
2026-02-04 01:03:09.441382 | orchestrator | 2026-02-04 01:03:09 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED
2026-02-04 01:03:09.442825 | orchestrator | 2026-02-04 01:03:09 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:03:09.445302 | orchestrator | 2026-02-04 01:03:09 | INFO  | Task 27ff0a39-8ef9-4922-9f46-3a5039d05047 is in state SUCCESS
2026-02-04 01:03:09.446380 | orchestrator |
2026-02-04 01:03:09.446423 | orchestrator |
2026-02-04 01:03:09.446432 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 01:03:09.446440 | orchestrator |
2026-02-04 01:03:09.446446 | orchestrator | TASK [Group hosts based on Kolla action]
*************************************** 2026-02-04 01:03:09.446452 | orchestrator | Wednesday 04 February 2026 01:02:17 +0000 (0:00:00.260) 0:00:00.260 **** 2026-02-04 01:03:09.446458 | orchestrator | ok: [testbed-manager] 2026-02-04 01:03:09.446466 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:03:09.446472 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:03:09.446478 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:03:09.446484 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:03:09.446490 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:03:09.446496 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:03:09.446502 | orchestrator | 2026-02-04 01:03:09.446508 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 01:03:09.446514 | orchestrator | Wednesday 04 February 2026 01:02:18 +0000 (0:00:00.810) 0:00:01.071 **** 2026-02-04 01:03:09.446521 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-02-04 01:03:09.446527 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-02-04 01:03:09.446534 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-02-04 01:03:09.446540 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-02-04 01:03:09.446622 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-02-04 01:03:09.446629 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-02-04 01:03:09.446636 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-02-04 01:03:09.446855 | orchestrator | 2026-02-04 01:03:09.446866 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-04 01:03:09.446873 | orchestrator | 2026-02-04 01:03:09.446879 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-02-04 01:03:09.446885 | orchestrator | Wednesday 04 February 2026 01:02:18 +0000 
(0:00:00.547) 0:00:01.619 **** 2026-02-04 01:03:09.446894 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 01:03:09.446901 | orchestrator | 2026-02-04 01:03:09.446907 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-02-04 01:03:09.446914 | orchestrator | Wednesday 04 February 2026 01:02:20 +0000 (0:00:01.457) 0:00:03.077 **** 2026-02-04 01:03:09.446921 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-02-04 01:03:09.446927 | orchestrator | 2026-02-04 01:03:09.446934 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-02-04 01:03:09.446940 | orchestrator | Wednesday 04 February 2026 01:02:23 +0000 (0:00:02.912) 0:00:05.989 **** 2026-02-04 01:03:09.446947 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-02-04 01:03:09.446955 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-02-04 01:03:09.446961 | orchestrator | 2026-02-04 01:03:09.446966 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-02-04 01:03:09.446972 | orchestrator | Wednesday 04 February 2026 01:02:29 +0000 (0:00:05.803) 0:00:11.792 **** 2026-02-04 01:03:09.446978 | orchestrator | ok: [testbed-manager] => (item=service) 2026-02-04 01:03:09.446983 | orchestrator | 2026-02-04 01:03:09.446989 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-02-04 01:03:09.447018 | orchestrator | Wednesday 04 February 2026 01:02:31 +0000 (0:00:02.689) 0:00:14.481 **** 2026-02-04 01:03:09.447024 | orchestrator | [WARNING]: Module did not set no_log for 
update_password 2026-02-04 01:03:09.447031 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-02-04 01:03:09.447036 | orchestrator | 2026-02-04 01:03:09.447057 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-02-04 01:03:09.447063 | orchestrator | Wednesday 04 February 2026 01:02:35 +0000 (0:00:03.339) 0:00:17.821 **** 2026-02-04 01:03:09.447069 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-02-04 01:03:09.447075 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-02-04 01:03:09.447081 | orchestrator | 2026-02-04 01:03:09.447088 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-02-04 01:03:09.447094 | orchestrator | Wednesday 04 February 2026 01:02:40 +0000 (0:00:05.279) 0:00:23.100 **** 2026-02-04 01:03:09.447100 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-02-04 01:03:09.447107 | orchestrator | 2026-02-04 01:03:09.447113 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:03:09.447119 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:03:09.447127 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:03:09.447133 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:03:09.447139 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:03:09.447145 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:03:09.447162 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:03:09.447169 | orchestrator | testbed-node-5 : ok=3  
changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:03:09.447175 | orchestrator | 2026-02-04 01:03:09.447181 | orchestrator | 2026-02-04 01:03:09.447188 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:03:09.447194 | orchestrator | Wednesday 04 February 2026 01:02:45 +0000 (0:00:04.739) 0:00:27.839 **** 2026-02-04 01:03:09.447200 | orchestrator | =============================================================================== 2026-02-04 01:03:09.447206 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.80s 2026-02-04 01:03:09.447211 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.28s 2026-02-04 01:03:09.447217 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.74s 2026-02-04 01:03:09.447223 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.34s 2026-02-04 01:03:09.447229 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 2.91s 2026-02-04 01:03:09.447234 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.69s 2026-02-04 01:03:09.447240 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.46s 2026-02-04 01:03:09.447246 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.81s 2026-02-04 01:03:09.447252 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s 2026-02-04 01:03:09.447257 | orchestrator | 2026-02-04 01:03:09.447263 | orchestrator | 2026-02-04 01:03:09.447269 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 01:03:09.447275 | orchestrator | 2026-02-04 01:03:09.447281 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2026-02-04 01:03:09.447294 | orchestrator | Wednesday 04 February 2026 01:01:20 +0000 (0:00:00.255) 0:00:00.255 **** 2026-02-04 01:03:09.447300 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:03:09.447307 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:03:09.447312 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:03:09.447318 | orchestrator | 2026-02-04 01:03:09.447325 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 01:03:09.447329 | orchestrator | Wednesday 04 February 2026 01:01:21 +0000 (0:00:00.312) 0:00:00.567 **** 2026-02-04 01:03:09.447333 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-02-04 01:03:09.447337 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-02-04 01:03:09.447341 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-02-04 01:03:09.447345 | orchestrator | 2026-02-04 01:03:09.447349 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-02-04 01:03:09.447353 | orchestrator | 2026-02-04 01:03:09.447357 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-04 01:03:09.447361 | orchestrator | Wednesday 04 February 2026 01:01:21 +0000 (0:00:00.410) 0:00:00.978 **** 2026-02-04 01:03:09.447365 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:03:09.447369 | orchestrator | 2026-02-04 01:03:09.447373 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-02-04 01:03:09.447377 | orchestrator | Wednesday 04 February 2026 01:01:22 +0000 (0:00:00.457) 0:00:01.435 **** 2026-02-04 01:03:09.447381 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-02-04 01:03:09.447385 | orchestrator | 2026-02-04 01:03:09.447389 | orchestrator | 
TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-02-04 01:03:09.447406 | orchestrator | Wednesday 04 February 2026 01:01:25 +0000 (0:00:03.433) 0:00:04.869 **** 2026-02-04 01:03:09.447420 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-02-04 01:03:09.447432 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-02-04 01:03:09.447438 | orchestrator | 2026-02-04 01:03:09.447444 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-02-04 01:03:09.447451 | orchestrator | Wednesday 04 February 2026 01:01:31 +0000 (0:00:06.413) 0:00:11.282 **** 2026-02-04 01:03:09.447457 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-04 01:03:09.447465 | orchestrator | 2026-02-04 01:03:09.447472 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-02-04 01:03:09.447479 | orchestrator | Wednesday 04 February 2026 01:01:35 +0000 (0:00:03.396) 0:00:14.679 **** 2026-02-04 01:03:09.447486 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-04 01:03:09.447493 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-02-04 01:03:09.447499 | orchestrator | 2026-02-04 01:03:09.447506 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-02-04 01:03:09.447514 | orchestrator | Wednesday 04 February 2026 01:01:39 +0000 (0:00:04.070) 0:00:18.749 **** 2026-02-04 01:03:09.447519 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-04 01:03:09.447523 | orchestrator | 2026-02-04 01:03:09.447528 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-02-04 01:03:09.447533 | orchestrator | Wednesday 04 February 2026 01:01:42 +0000 (0:00:03.256) 0:00:22.006 **** 2026-02-04 
01:03:09.447538 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-02-04 01:03:09.447542 | orchestrator | 2026-02-04 01:03:09.447547 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-02-04 01:03:09.447551 | orchestrator | Wednesday 04 February 2026 01:01:46 +0000 (0:00:03.578) 0:00:25.584 **** 2026-02-04 01:03:09.447556 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:03:09.447560 | orchestrator | 2026-02-04 01:03:09.447570 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-02-04 01:03:09.447582 | orchestrator | Wednesday 04 February 2026 01:01:49 +0000 (0:00:03.400) 0:00:28.985 **** 2026-02-04 01:03:09.447587 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:03:09.447592 | orchestrator | 2026-02-04 01:03:09.447597 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-02-04 01:03:09.447602 | orchestrator | Wednesday 04 February 2026 01:01:53 +0000 (0:00:04.062) 0:00:33.048 **** 2026-02-04 01:03:09.447606 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:03:09.447611 | orchestrator | 2026-02-04 01:03:09.447616 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-02-04 01:03:09.447620 | orchestrator | Wednesday 04 February 2026 01:01:57 +0000 (0:00:03.540) 0:00:36.588 **** 2026-02-04 01:03:09.447627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:03:09.447636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:03:09.447644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:03:09.447650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:03:09.447664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:03:09.447669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:03:09.447674 | orchestrator | 2026-02-04 01:03:09.447678 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-02-04 01:03:09.447683 | orchestrator | Wednesday 04 February 2026 01:01:58 +0000 (0:00:01.503) 0:00:38.092 **** 2026-02-04 01:03:09.447688 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:03:09.447692 | orchestrator | 2026-02-04 01:03:09.447697 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-02-04 01:03:09.447743 | orchestrator | Wednesday 04 February 2026 01:01:58 +0000 (0:00:00.135) 0:00:38.228 **** 2026-02-04 01:03:09.447754 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:03:09.447760 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:03:09.447765 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:03:09.447771 | orchestrator | 2026-02-04 01:03:09.447777 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-02-04 01:03:09.447783 | orchestrator | Wednesday 04 February 2026 01:01:59 +0000 (0:00:00.482) 0:00:38.711 **** 2026-02-04 01:03:09.447789 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 01:03:09.447795 | orchestrator | 2026-02-04 01:03:09.447801 | orchestrator | TASK [magnum : 
Copying over kubeconfig file] *********************************** 2026-02-04 01:03:09.447807 | orchestrator | Wednesday 04 February 2026 01:02:00 +0000 (0:00:00.836) 0:00:39.547 **** 2026-02-04 01:03:09.447875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:03:09.447892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:03:09.447906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:03:09.447913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:03:09.447920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:03:09.447927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:03:09.447935 | orchestrator | 2026-02-04 01:03:09.447939 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-02-04 01:03:09.447943 | orchestrator | Wednesday 04 February 2026 01:02:02 +0000 (0:00:02.466) 0:00:42.014 **** 2026-02-04 01:03:09.447947 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:03:09.447951 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:03:09.447955 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:03:09.447959 | orchestrator | 2026-02-04 01:03:09.447963 | orchestrator | TASK [magnum : include_tasks] 
************************************************** 2026-02-04 01:03:09.447967 | orchestrator | Wednesday 04 February 2026 01:02:03 +0000 (0:00:00.801) 0:00:42.816 **** 2026-02-04 01:03:09.447971 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:03:09.447975 | orchestrator | 2026-02-04 01:03:09.447979 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-02-04 01:03:09.447982 | orchestrator | Wednesday 04 February 2026 01:02:05 +0000 (0:00:02.043) 0:00:44.859 **** 2026-02-04 01:03:09.448022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:03:09.448027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:03:09.448032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:03:09.448041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:03:09.448046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:03:09.448055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:03:09.448059 | orchestrator | 2026-02-04 01:03:09.448063 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS 
certificate] *** 2026-02-04 01:03:09.448067 | orchestrator | Wednesday 04 February 2026 01:02:08 +0000 (0:00:02.957) 0:00:47.817 **** 2026-02-04 01:03:09.448071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 01:03:09.448075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:03:09.448084 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:03:09.448091 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 01:03:09.448098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:03:09.448103 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:03:09.448107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 01:03:09.448111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:03:09.448115 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:03:09.448119 | orchestrator | 2026-02-04 01:03:09.448123 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-02-04 01:03:09.448130 | orchestrator | Wednesday 04 February 2026 01:02:09 +0000 (0:00:01.167) 0:00:48.984 **** 2026-02-04 01:03:09.448137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 01:03:09.448141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:03:09.448145 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:03:09.448154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 01:03:09.448158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:03:09.448162 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:03:09.448166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 01:03:09.448176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:03:09.448180 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:03:09.448184 | orchestrator | 2026-02-04 01:03:09.448188 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-02-04 01:03:09.448192 | orchestrator | Wednesday 04 February 2026 01:02:11 +0000 (0:00:02.174) 0:00:51.159 **** 2026-02-04 01:03:09.448196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:03:09.448232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:03:09.448238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:03:09.448246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:03:09.448253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:03:09.448257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:03:09.448261 | orchestrator | 2026-02-04 01:03:09.448268 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-02-04 01:03:09.448272 | orchestrator | Wednesday 04 February 2026 01:02:14 +0000 (0:00:02.627) 0:00:53.786 **** 2026-02-04 01:03:09.448276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:03:09.448280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:03:09.448290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}}) 2026-02-04 01:03:09.448294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:03:09.448302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:03:09.448306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:03:09.448310 | orchestrator | 2026-02-04 01:03:09.448317 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-02-04 01:03:09.448321 | orchestrator | Wednesday 04 February 2026 01:02:19 +0000 (0:00:04.642) 0:00:58.429 **** 2026-02-04 01:03:09.448325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 01:03:09.448332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:03:09.448336 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:03:09.448340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 01:03:09.448349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:03:09.448354 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:03:09.448358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 01:03:09.448366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:03:09.448371 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:03:09.448375 | orchestrator | 2026-02-04 01:03:09.448379 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-02-04 01:03:09.448382 | orchestrator | Wednesday 04 February 2026 01:02:19 +0000 (0:00:00.596) 0:00:59.025 **** 2026-02-04 01:03:09.448389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:03:09.448396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:03:09.448401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 01:03:09.448410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:03:09.448417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:03:09.448429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:03:09.448438 | orchestrator | 2026-02-04 01:03:09.448444 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-04 01:03:09.448450 | orchestrator | Wednesday 04 February 2026 01:02:21 +0000 (0:00:02.017) 0:01:01.042 **** 2026-02-04 01:03:09.448456 | orchestrator | skipping: 
[testbed-node-0] 2026-02-04 01:03:09.448463 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:03:09.448470 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:03:09.448477 | orchestrator | 2026-02-04 01:03:09.448483 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-02-04 01:03:09.448491 | orchestrator | Wednesday 04 February 2026 01:02:21 +0000 (0:00:00.217) 0:01:01.259 **** 2026-02-04 01:03:09.448495 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:03:09.448499 | orchestrator | 2026-02-04 01:03:09.448503 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-02-04 01:03:09.448507 | orchestrator | Wednesday 04 February 2026 01:02:24 +0000 (0:00:02.221) 0:01:03.481 **** 2026-02-04 01:03:09.448511 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:03:09.448515 | orchestrator | 2026-02-04 01:03:09.448519 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-02-04 01:03:09.448526 | orchestrator | Wednesday 04 February 2026 01:02:26 +0000 (0:00:02.328) 0:01:05.809 **** 2026-02-04 01:03:09.448534 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:03:09.448538 | orchestrator | 2026-02-04 01:03:09.448542 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-04 01:03:09.448546 | orchestrator | Wednesday 04 February 2026 01:02:44 +0000 (0:00:17.616) 0:01:23.426 **** 2026-02-04 01:03:09.448550 | orchestrator | 2026-02-04 01:03:09.448554 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-04 01:03:09.448558 | orchestrator | Wednesday 04 February 2026 01:02:44 +0000 (0:00:00.113) 0:01:23.540 **** 2026-02-04 01:03:09.448562 | orchestrator | 2026-02-04 01:03:09.448566 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-04 01:03:09.448569 | 
orchestrator | Wednesday 04 February 2026 01:02:44 +0000 (0:00:00.086) 0:01:23.626 **** 2026-02-04 01:03:09.448573 | orchestrator | 2026-02-04 01:03:09.448577 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-02-04 01:03:09.448581 | orchestrator | Wednesday 04 February 2026 01:02:44 +0000 (0:00:00.064) 0:01:23.690 **** 2026-02-04 01:03:09.448585 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:03:09.448589 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:03:09.448593 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:03:09.448597 | orchestrator | 2026-02-04 01:03:09.448601 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-04 01:03:09.448604 | orchestrator | Wednesday 04 February 2026 01:02:57 +0000 (0:00:12.900) 0:01:36.590 **** 2026-02-04 01:03:09.448608 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:03:09.448612 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:03:09.448616 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:03:09.448620 | orchestrator | 2026-02-04 01:03:09.448624 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:03:09.448647 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 01:03:09.448652 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 01:03:09.448656 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 01:03:09.448660 | orchestrator | 2026-02-04 01:03:09.448664 | orchestrator | 2026-02-04 01:03:09.448667 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:03:09.448671 | orchestrator | Wednesday 04 February 2026 01:03:06 +0000 (0:00:09.213) 0:01:45.804 **** 2026-02-04 
01:03:09.448675 | orchestrator | =============================================================================== 2026-02-04 01:03:09.448679 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.62s 2026-02-04 01:03:09.448683 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 12.90s 2026-02-04 01:03:09.448688 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 9.21s 2026-02-04 01:03:09.448692 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.41s 2026-02-04 01:03:09.448695 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.64s 2026-02-04 01:03:09.448699 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.07s 2026-02-04 01:03:09.448733 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.06s 2026-02-04 01:03:09.448739 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.58s 2026-02-04 01:03:09.448746 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.54s 2026-02-04 01:03:09.448752 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.43s 2026-02-04 01:03:09.448765 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.40s 2026-02-04 01:03:09.448780 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.40s 2026-02-04 01:03:09.448785 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.26s 2026-02-04 01:03:09.448818 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.95s 2026-02-04 01:03:09.448826 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.63s 2026-02-04 01:03:09.448832 
| orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.47s 2026-02-04 01:03:09.448837 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.33s 2026-02-04 01:03:09.448844 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.22s 2026-02-04 01:03:09.448849 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 2.17s 2026-02-04 01:03:09.448855 | orchestrator | magnum : include_tasks -------------------------------------------------- 2.05s 2026-02-04 01:03:09.448862 | orchestrator | 2026-02-04 01:03:09 | INFO  | Task 03a02d33-3e5b-4aae-86de-2ea2b7c4afad is in state STARTED 2026-02-04 01:03:09.448868 | orchestrator | 2026-02-04 01:03:09 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:03:12.489847 | orchestrator | 2026-02-04 01:03:12 | INFO  | Task ca7dbfd6-1e95-4a20-81df-751168ccfc72 is in state STARTED 2026-02-04 01:03:12.490981 | orchestrator | 2026-02-04 01:03:12 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED 2026-02-04 01:03:12.491825 | orchestrator | 2026-02-04 01:03:12 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED 2026-02-04 01:03:12.496285 | orchestrator | 2026-02-04 01:03:12 | INFO  | Task 03a02d33-3e5b-4aae-86de-2ea2b7c4afad is in state STARTED 2026-02-04 01:03:12.496362 | orchestrator | 2026-02-04 01:03:12 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:03:15.526824 | orchestrator | 2026-02-04 01:03:15 | INFO  | Task ca7dbfd6-1e95-4a20-81df-751168ccfc72 is in state STARTED 2026-02-04 01:03:15.526914 | orchestrator | 2026-02-04 01:03:15 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state STARTED 2026-02-04 01:03:15.527543 | orchestrator | 2026-02-04 01:03:15 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED 2026-02-04 01:03:15.528188 | orchestrator | 2026-02-04 01:03:15 | INFO  | Task 
03a02d33-3e5b-4aae-86de-2ea2b7c4afad is in state STARTED 2026-02-04 01:04:28.421046 | orchestrator | 2026-02-04 01:04:28 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:04:31.458057 | orchestrator | 2026-02-04 01:04:31 | INFO  | Task ca7dbfd6-1e95-4a20-81df-751168ccfc72 is in state STARTED 2026-02-04 01:04:31.460744 | orchestrator | 2026-02-04 01:04:31 | INFO  | Task 7e589528-420e-4ab0-af4a-7149ae3bb78e is in state SUCCESS 2026-02-04 01:04:31.461637 | orchestrator | 2026-02-04 01:04:31.461673 | orchestrator | 2026-02-04 01:04:31.461684 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 01:04:31.461707 | orchestrator | 2026-02-04 01:04:31.461714 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 01:04:31.461728 | orchestrator | Wednesday 04 February 2026 01:01:40 +0000 (0:00:00.255) 0:00:00.255 **** 2026-02-04 01:04:31.461735 | orchestrator | ok: [testbed-manager] 2026-02-04 01:04:31.461741 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:04:31.461747 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:04:31.461753 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:04:31.461759 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:04:31.461765 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:04:31.461771 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:04:31.461776 | orchestrator | 2026-02-04 01:04:31.461783 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 01:04:31.461789 | orchestrator | Wednesday 04 February 2026 01:01:40 +0000 (0:00:00.899) 0:00:01.154 **** 2026-02-04 01:04:31.461795 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-02-04 01:04:31.461801 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-02-04 01:04:31.461807 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-02-04 
01:04:31.461813 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-02-04 01:04:31.461894 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-02-04 01:04:31.461904 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-02-04 01:04:31.461910 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-02-04 01:04:31.461916 | orchestrator | 2026-02-04 01:04:31.461922 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-02-04 01:04:31.461928 | orchestrator | 2026-02-04 01:04:31.461934 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-04 01:04:31.462179 | orchestrator | Wednesday 04 February 2026 01:01:41 +0000 (0:00:00.595) 0:00:01.750 **** 2026-02-04 01:04:31.462192 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 01:04:31.462199 | orchestrator | 2026-02-04 01:04:31.462205 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-02-04 01:04:31.462211 | orchestrator | Wednesday 04 February 2026 01:01:42 +0000 (0:00:01.217) 0:00:02.967 **** 2026-02-04 01:04:31.462219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:04:31.462287 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:04:31.462296 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:04:31.462312 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-04 01:04:31.462382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:04:31.462393 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:04:31.462400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.462407 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:04:31.462441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.462448 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.462457 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:04:31.462603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.462618 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.462660 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.462668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.462675 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.462681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.462687 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.462698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.462722 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.462730 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-04 01:04:31.462738 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.462744 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.462751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.462761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.462767 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.462806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.462814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.462861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.462867 | orchestrator | 2026-02-04 01:04:31.462873 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-04 01:04:31.462880 | orchestrator | Wednesday 04 February 2026 01:01:45 +0000 (0:00:02.623) 0:00:05.591 **** 2026-02-04 01:04:31.463307 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 01:04:31.463316 | orchestrator | 2026-02-04 01:04:31.463352 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-02-04 01:04:31.463357 | orchestrator | Wednesday 04 February 2026 01:01:46 +0000 (0:00:01.179) 0:00:06.770 **** 2026-02-04 01:04:31.463363 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-04 01:04:31.463372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:04:31.463377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:04:31.463416 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:04:31.463424 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:04:31.463555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:04:31.463568 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:04:31.463575 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:04:31.463588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.463595 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.463602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.463663 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.463673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.463679 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.463685 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.463695 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.463702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.463709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.463715 | orchestrator | changed: [testbed-node-3] 
=> (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.463738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.463744 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.463751 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-04 01:04:31.463765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.463771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.463778 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.463796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.463805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.463811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.463834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.463845 | orchestrator | 2026-02-04 01:04:31.463851 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-04 01:04:31.463858 | orchestrator | Wednesday 04 February 2026 01:01:51 +0000 (0:00:05.160) 0:00:11.931 **** 2026-02-04 01:04:31.463864 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-04 01:04:31.463870 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:04:31.463877 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:04:31.463901 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  
2026-02-04 01:04:31.463908 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:04:31.463914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:04:31.463926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:04:31.463932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:04:31.463939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:04:31.463946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:04:31.463953 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:04:31.463977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2026-02-04 01:04:31.463985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:04:31.464042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:04:31.464056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:04:31.464063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:04:31.464070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:04:31.464077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:04:31.464083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:04:31.464135 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:04:31.464145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:04:31.464155 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:04:31.464162 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:04:31.464168 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:04:31.464174 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:04:31.464180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:04:31.464186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 01:04:31.464192 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:04:31.464199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:04:31.464205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:04:31.464230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 01:04:31.464237 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:04:31.464243 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:04:31.464254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-04 01:04:31.464260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-04 01:04:31.464266 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:04:31.464272 | orchestrator |
2026-02-04 01:04:31.464278 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-02-04 01:04:31.464284 | orchestrator | Wednesday 04 February 2026 01:01:53 +0000 (0:00:01.424) 0:00:13.356 ****
2026-02-04 01:04:31.464290 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-04 01:04:31.464296 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name':
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:04:31.464303 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:04:31.464339 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-04 01:04:31.464351 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:04:31.464357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:04:31.464363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:04:31.464369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:04:31.464375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:04:31.464382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:04:31.464405 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:04:31.464412 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:04:31.464420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:04:31.464426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:04:31.464432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:04:31.464439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:04:31.464445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:04:31.464452 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:04:31.464457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:04:31.464464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:04:31.464489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:04:31.464500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:04:31.464507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 01:04:31.464514 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:04:31.464520 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2026-02-04 01:04:31.464527 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:04:31.464533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 01:04:31.464540 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:04:31.464546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:04:31.464556 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 01:04:31.464584 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 01:04:31.464592 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:04:31.464599 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 01:04:31.464603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-04 01:04:31.464607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-04 01:04:31.464611 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:04:31.464615 | orchestrator |
2026-02-04 01:04:31.464619 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-02-04 01:04:31.464623 | orchestrator | Wednesday 04 February 2026 01:01:55 +0000 (0:00:01.906) 0:00:15.263 ****
2026-02-04 01:04:31.464627 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-04 01:04:31.464631 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-04 01:04:31.464650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-04 01:04:31.464657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-04 01:04:31.464661 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-04 01:04:31.464665 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-04 01:04:31.464669 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-04 01:04:31.464673 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-04 01:04:31.464678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 01:04:31.464684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 01:04:31.464700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 01:04:31.464706 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-04 01:04:31.464710 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-04 01:04:31.464714 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-04 01:04:31.464719 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-04 01:04:31.464723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 01:04:31.464730 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-04 01:04:31.464734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 01:04:31.464751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 01:04:31.464756 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-04 01:04:31.464760 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-04 01:04:31.464764 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-04 01:04:31.464769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-04 01:04:31.464775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-04 01:04:31.464790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-04 01:04:31.464796 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 01:04:31.464801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 01:04:31.464805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 01:04:31.464809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 01:04:31.464813 | orchestrator |
2026-02-04 01:04:31.464875 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2026-02-04 01:04:31.464903 | orchestrator | Wednesday 04 February 2026 01:02:00 +0000 (0:00:05.579) 0:00:20.843 ****
2026-02-04 01:04:31.464910 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-04 01:04:31.464920 | orchestrator |
2026-02-04 01:04:31.464924 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2026-02-04 01:04:31.464928 | orchestrator | Wednesday 04 February 2026 01:02:01 +0000 (0:00:01.264) 0:00:22.107 ****
2026-02-04 01:04:31.464933 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084367, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3595943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.464937 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084367, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3595943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.464967 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084367, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3595943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.464976 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084409, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3634694, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.464980 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084367, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3595943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.464984 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084367, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3595943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.464988 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084367, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3595943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.464995 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084357, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.358845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.464999 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084409, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3634694, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465014 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1084367, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3595943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465022 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084409, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3634694, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465026 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084389, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.362266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465030 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084357, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.358845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465037 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084352, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3569136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465041 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084409, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3634694, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465045 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084389, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.362266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465060 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084409, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3634694, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465066 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084357, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.358845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465071 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084409, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3634694, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465075 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084372, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3598323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465116 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084357, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.358845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465121 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084352, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3569136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465125 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084357, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.358845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465142 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1084409, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3634694, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465149 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084389, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.362266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465154 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084389, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.362266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465158 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084357, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.358845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465165 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084372, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3598323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465169 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084386, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3616943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465174 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084389, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.362266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465189 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084352, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3569136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465195 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084386, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3616943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465200 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084352, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3569136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465204 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084389, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.362266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465211 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084352, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3569136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465216 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084375, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3600621, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465220 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084375, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3600621, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465224 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084372, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3598323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465240 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084372, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3598323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465246 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084364, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3592856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465253 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084386, 'dev': 129, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3616943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465258 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084364, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3592856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465262 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084372, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3598323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465266 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084386, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3616943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465271 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084405, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.362991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465287 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084386, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3616943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465292 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1084357, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.358845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:04:31.465299 | orchestrator | 
skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084352, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3569136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465303 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084375, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3600621, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465307 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084375, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3600621, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465311 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084405, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.362991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465316 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084345, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3561172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465332 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084375, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3600621, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465337 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084372, 'dev': 129, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3598323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465344 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084364, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3592856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465349 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084364, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3592856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465353 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084345, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3561172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465357 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1084389, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.362266, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:04:31.465361 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084386, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3616943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465378 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1084439, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.365556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465388 | 
orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084364, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3592856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465392 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084405, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.362991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465397 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1084439, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.365556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465401 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084405, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.362991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465405 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084345, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3561172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465409 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084405, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.362991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465427 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084375, 'dev': 129, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3600621, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465435 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084400, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3626888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465439 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084400, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3626888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465443 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084345, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3561172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465448 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1084352, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3569136, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:04:31.465452 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084364, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3592856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465456 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084345, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3561172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2026-02-04 01:04:31.465472 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1084439, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.365556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465479 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1084439, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.365556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465483 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084354, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.357715, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465487 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084405, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.362991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465492 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084400, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3626888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465496 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084354, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.357715, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465500 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1084439, 'dev': 129, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.365556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465509 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084400, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3626888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465520 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084347, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3565068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 01:04:31.465525 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084345, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3561172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465529 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084354, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.357715, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465533 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084354, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.357715, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465537 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084347, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3565068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465542 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1084439, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.365556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465556 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084400, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3626888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465561 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1084372, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3598323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465565 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084400, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3626888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465569 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084347, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3565068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465573 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084384, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3611684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465577 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084354, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.357715, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465581 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084347, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3565068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465596 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084384, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3611684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465601 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084354, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.357715, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465605 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084376, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.360294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465609 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084347, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3565068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465613 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084376, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.360294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465617 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084384, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3611684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465621 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084384, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3611684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465632 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084347, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3565068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465637 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084376, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.360294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465641 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084432, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3651783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465645 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:04:31.465649 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084384, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3611684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465654 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084384, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3611684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465658 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084432, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3651783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465665 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:04:31.465669 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084432, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3651783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465673 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:04:31.465681 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084376, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.360294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465685 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1084386, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3616943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465690 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084376, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.360294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465694 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084376, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.360294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465698 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084432, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3651783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465702 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:04:31.465706 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084432, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3651783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465714 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:04:31.465718 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084432, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3651783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465722 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:04:31.465730 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1084375, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3600621, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465734 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1084364, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3592856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465739 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084405, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.362991, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465743 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084345, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3561172, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465747 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1084439, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.365556, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465754 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1084400, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3626888, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465758 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1084354, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.357715, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465766 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1084347, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3565068, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465771 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1084384, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3611684, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465775 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1084376, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.360294, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465779 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1084432, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3651783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 01:04:31.465783 | orchestrator |
2026-02-04 01:04:31.465787 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-02-04 01:04:31.465791 | orchestrator | Wednesday 04 February 2026 01:02:24 +0000 (0:00:22.906) 0:00:45.013 ****
2026-02-04 01:04:31.465795 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-04 01:04:31.465802 | orchestrator |
2026-02-04 01:04:31.465806 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-02-04 01:04:31.465810 | orchestrator | Wednesday 04 February 2026 01:02:25 +0000 (0:00:00.620) 0:00:45.634 ****
2026-02-04 01:04:31.465814 | orchestrator | [WARNING]: Skipped
2026-02-04 01:04:31.465831 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-04 01:04:31.465839 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-02-04 01:04:31.465845 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-04 01:04:31.465851 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-02-04 01:04:31.465858 | orchestrator | [WARNING]: Skipped
2026-02-04 01:04:31.465865 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-04 01:04:31.465871 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-02-04 01:04:31.465878 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-04 01:04:31.465883 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-02-04 01:04:31.465888 | orchestrator | [WARNING]: Skipped
2026-02-04 01:04:31.465892 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-04 01:04:31.465898 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-02-04 01:04:31.465906 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-04 01:04:31.465914 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-02-04 01:04:31.465920 | orchestrator | [WARNING]: Skipped
2026-02-04 01:04:31.465926 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-04 01:04:31.465932 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-02-04 01:04:31.465938 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-04 01:04:31.465945 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-02-04 01:04:31.465950 | orchestrator | [WARNING]: Skipped
2026-02-04 01:04:31.465956 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-04 01:04:31.466050 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-02-04 01:04:31.466060 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-04 01:04:31.466069 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-02-04 01:04:31.466074 | orchestrator | [WARNING]: Skipped
2026-02-04 01:04:31.466078 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-04 01:04:31.466085 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-02-04 01:04:31.466089 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-04 01:04:31.466093 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-02-04 01:04:31.466097 | orchestrator | [WARNING]: Skipped
2026-02-04 01:04:31.466101 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-04 01:04:31.466105 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-02-04 01:04:31.466109 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-04 01:04:31.466113 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-02-04 01:04:31.466117 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-04 01:04:31.466121 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-04 01:04:31.466124 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-04 01:04:31.466128 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-04 01:04:31.466132 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-04 01:04:31.466136 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-04 01:04:31.466140 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-04 01:04:31.466148 | orchestrator |
2026-02-04 01:04:31.466152 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-02-04 01:04:31.466156 | orchestrator | Wednesday 04 February 2026 01:02:26 +0000 (0:00:01.557) 0:00:47.191 ****
2026-02-04 01:04:31.466160 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-04 01:04:31.466165 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:04:31.466169 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-04 01:04:31.466173 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:04:31.466177 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-04 01:04:31.466181 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:04:31.466185 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-04 01:04:31.466189 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:04:31.466193 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-04 01:04:31.466197 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:04:31.466201 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-04 01:04:31.466205 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:04:31.466209 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-04 01:04:31.466213 | orchestrator |
2026-02-04 01:04:31.466217 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-02-04 01:04:31.466221 | orchestrator | Wednesday 04 February 2026 01:02:40 +0000 (0:00:13.435) 0:01:00.626 ****
2026-02-04 01:04:31.466225 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-04 01:04:31.466229 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:04:31.466233 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-04 01:04:31.466237 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:04:31.466241 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-04 01:04:31.466245 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:04:31.466249 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-04 01:04:31.466253 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:04:31.466257 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-04 01:04:31.466261 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:04:31.466265 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-04 01:04:31.466269 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:04:31.466273 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-04 01:04:31.466277 | orchestrator |
2026-02-04 01:04:31.466281 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-02-04 01:04:31.466285 | orchestrator | Wednesday 04 February 2026 01:02:43 +0000 (0:00:02.669) 0:01:03.296 ****
2026-02-04 01:04:31.466289 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-04 01:04:31.466294 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-04 01:04:31.466298 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:04:31.466302 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:04:31.466306 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-04 01:04:31.466310 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:04:31.466317 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-04 01:04:31.466324 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-04 01:04:31.466329 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:04:31.466334 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-04 01:04:31.466339 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:04:31.466342 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-04 01:04:31.466346 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:04:31.466350 | orchestrator |
2026-02-04 01:04:31.466354 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-02-04 01:04:31.466358 | orchestrator | Wednesday 04 February 2026 01:02:44 +0000 (0:00:01.805) 0:01:05.101 ****
2026-02-04 01:04:31.466362 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-04 01:04:31.466366 | orchestrator |
2026-02-04 01:04:31.466370 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-02-04 01:04:31.466374 | orchestrator | Wednesday 04 February 2026 01:02:46 +0000 (0:00:01.133) 0:01:06.235 ****
2026-02-04 01:04:31.466378 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:04:31.466382 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:04:31.466386 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:04:31.466390 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:04:31.466394 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:04:31.466398 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:04:31.466402 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:04:31.466405 | orchestrator |
2026-02-04 01:04:31.466409 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-02-04 01:04:31.466413 | orchestrator | Wednesday 04 February 2026 01:02:46 +0000 (0:00:00.838) 0:01:07.073 ****
2026-02-04 01:04:31.466420 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:04:31.466427 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:04:31.466433 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:04:31.466440 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:04:31.466446 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:04:31.466456 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:04:31.466463 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:04:31.466469 | orchestrator |
2026-02-04 01:04:31.466476 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-02-04 01:04:31.466483 | orchestrator | Wednesday 04 February 2026 01:02:48 +0000 (0:00:02.097) 0:01:09.171 ****
2026-02-04 01:04:31.466490 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-04 01:04:31.466496 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:04:31.466502 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-04 01:04:31.466510 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:04:31.466514 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-04 01:04:31.466518 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:04:31.466522 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-04 01:04:31.466526 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:04:31.466529 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-04 01:04:31.466533 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:04:31.466537 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-04 01:04:31.466541 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:04:31.466548 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-04 01:04:31.466552 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:04:31.466556 | orchestrator |
2026-02-04 01:04:31.466560 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-02-04 01:04:31.466564 | orchestrator | Wednesday 04 February 2026 01:02:50 +0000 (0:00:01.324) 0:01:10.496 ****
2026-02-04 01:04:31.466568 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-04 01:04:31.466572 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:04:31.466576 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-04 01:04:31.466580 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:04:31.466584 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-04 01:04:31.466588 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:04:31.466591 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-04 01:04:31.466595 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-04 01:04:31.466599 | orchestrator | skipping: [testbed-node-2]
2026-02-04
01:04:31.466603 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:04:31.466607 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-02-04 01:04:31.466611 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-04 01:04:31.466622 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:04:31.466631 | orchestrator | 2026-02-04 01:04:31.466635 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-02-04 01:04:31.466642 | orchestrator | Wednesday 04 February 2026 01:02:51 +0000 (0:00:01.314) 0:01:11.811 **** 2026-02-04 01:04:31.466646 | orchestrator | [WARNING]: Skipped 2026-02-04 01:04:31.466650 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-02-04 01:04:31.466662 | orchestrator | due to this access issue: 2026-02-04 01:04:31.466667 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-02-04 01:04:31.466672 | orchestrator | not a directory 2026-02-04 01:04:31.466681 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-04 01:04:31.466685 | orchestrator | 2026-02-04 01:04:31.466690 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-02-04 01:04:31.466695 | orchestrator | Wednesday 04 February 2026 01:02:52 +0000 (0:00:00.891) 0:01:12.702 **** 2026-02-04 01:04:31.466700 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:04:31.466704 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:04:31.466709 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:04:31.466715 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:04:31.466722 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:04:31.466729 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:04:31.466736 | orchestrator | 
skipping: [testbed-node-5] 2026-02-04 01:04:31.466742 | orchestrator | 2026-02-04 01:04:31.466749 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-02-04 01:04:31.466756 | orchestrator | Wednesday 04 February 2026 01:02:53 +0000 (0:00:00.620) 0:01:13.323 **** 2026-02-04 01:04:31.466763 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:04:31.466771 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:04:31.466777 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:04:31.466784 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:04:31.466789 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:04:31.466793 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:04:31.466798 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:04:31.466803 | orchestrator | 2026-02-04 01:04:31.466812 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-02-04 01:04:31.466816 | orchestrator | Wednesday 04 February 2026 01:02:53 +0000 (0:00:00.683) 0:01:14.007 **** 2026-02-04 01:04:31.466849 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-04 01:04:31.466854 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:04:31.466858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:04:31.466862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:04:31.466871 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:04:31.466878 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:04:31.466882 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:04:31.466889 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 01:04:31.466893 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.466897 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.466902 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.466906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.466916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.466920 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.466929 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.466934 | orchestrator | 
changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-04 01:04:31.466939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.466943 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.466947 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.466956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.466960 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.466967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.466971 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.466975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.466979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.466984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.466990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 01:04:31.466996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.467003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 01:04:31.467007 | orchestrator | 2026-02-04 01:04:31.467011 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-02-04 01:04:31.467015 | orchestrator | Wednesday 04 February 2026 01:02:58 +0000 (0:00:04.375) 0:01:18.382 **** 2026-02-04 01:04:31.467019 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-04 01:04:31.467023 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:04:31.467027 | orchestrator | 2026-02-04 01:04:31.467030 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-04 01:04:31.467034 | orchestrator | Wednesday 04 February 2026 01:02:59 +0000 (0:00:01.053) 0:01:19.436 **** 2026-02-04 01:04:31.467038 | orchestrator | 2026-02-04 01:04:31.467042 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-04 01:04:31.467046 | orchestrator | Wednesday 04 February 2026 01:02:59 +0000 (0:00:00.063) 0:01:19.499 **** 2026-02-04 01:04:31.467050 | orchestrator | 2026-02-04 01:04:31.467054 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-04 01:04:31.467058 | orchestrator | Wednesday 04 February 2026 01:02:59 +0000 (0:00:00.066) 0:01:19.566 **** 2026-02-04 01:04:31.467062 | orchestrator | 2026-02-04 01:04:31.467066 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-04 01:04:31.467070 | orchestrator | Wednesday 04 February 2026 01:02:59 +0000 (0:00:00.058) 
0:01:19.624 **** 2026-02-04 01:04:31.467074 | orchestrator | 2026-02-04 01:04:31.467078 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-04 01:04:31.467082 | orchestrator | Wednesday 04 February 2026 01:02:59 +0000 (0:00:00.170) 0:01:19.795 **** 2026-02-04 01:04:31.467086 | orchestrator | 2026-02-04 01:04:31.467090 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-04 01:04:31.467094 | orchestrator | Wednesday 04 February 2026 01:02:59 +0000 (0:00:00.058) 0:01:19.853 **** 2026-02-04 01:04:31.467097 | orchestrator | 2026-02-04 01:04:31.467101 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-04 01:04:31.467105 | orchestrator | Wednesday 04 February 2026 01:02:59 +0000 (0:00:00.058) 0:01:19.911 **** 2026-02-04 01:04:31.467109 | orchestrator | 2026-02-04 01:04:31.467113 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-02-04 01:04:31.467117 | orchestrator | Wednesday 04 February 2026 01:02:59 +0000 (0:00:00.080) 0:01:19.992 **** 2026-02-04 01:04:31.467121 | orchestrator | changed: [testbed-manager] 2026-02-04 01:04:31.467125 | orchestrator | 2026-02-04 01:04:31.467129 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-02-04 01:04:31.467132 | orchestrator | Wednesday 04 February 2026 01:03:16 +0000 (0:00:17.056) 0:01:37.049 **** 2026-02-04 01:04:31.467136 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:04:31.467140 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:04:31.467144 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:04:31.467148 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:04:31.467152 | orchestrator | changed: [testbed-manager] 2026-02-04 01:04:31.467155 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:04:31.467159 | orchestrator | changed: 
[testbed-node-0] 2026-02-04 01:04:31.467163 | orchestrator | 2026-02-04 01:04:31.467167 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-02-04 01:04:31.467174 | orchestrator | Wednesday 04 February 2026 01:03:31 +0000 (0:00:14.593) 0:01:51.642 **** 2026-02-04 01:04:31.467178 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:04:31.467182 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:04:31.467186 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:04:31.467190 | orchestrator | 2026-02-04 01:04:31.467194 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-02-04 01:04:31.467197 | orchestrator | Wednesday 04 February 2026 01:03:42 +0000 (0:00:11.060) 0:02:02.703 **** 2026-02-04 01:04:31.467201 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:04:31.467205 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:04:31.467209 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:04:31.467213 | orchestrator | 2026-02-04 01:04:31.467217 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-02-04 01:04:31.467221 | orchestrator | Wednesday 04 February 2026 01:03:53 +0000 (0:00:11.410) 0:02:14.114 **** 2026-02-04 01:04:31.467225 | orchestrator | changed: [testbed-manager] 2026-02-04 01:04:31.467229 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:04:31.467233 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:04:31.467237 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:04:31.467241 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:04:31.467244 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:04:31.467251 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:04:31.467255 | orchestrator | 2026-02-04 01:04:31.467259 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-02-04 01:04:31.467265 | orchestrator 
| Wednesday 04 February 2026 01:04:07 +0000 (0:00:14.094) 0:02:28.208 **** 2026-02-04 01:04:31.467269 | orchestrator | changed: [testbed-manager] 2026-02-04 01:04:31.467273 | orchestrator | 2026-02-04 01:04:31.467277 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-02-04 01:04:31.467281 | orchestrator | Wednesday 04 February 2026 01:04:14 +0000 (0:00:06.778) 0:02:34.987 **** 2026-02-04 01:04:31.467285 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:04:31.467289 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:04:31.467292 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:04:31.467296 | orchestrator | 2026-02-04 01:04:31.467300 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-02-04 01:04:31.467304 | orchestrator | Wednesday 04 February 2026 01:04:20 +0000 (0:00:05.591) 0:02:40.578 **** 2026-02-04 01:04:31.467308 | orchestrator | changed: [testbed-manager] 2026-02-04 01:04:31.467312 | orchestrator | 2026-02-04 01:04:31.467316 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-02-04 01:04:31.467320 | orchestrator | Wednesday 04 February 2026 01:04:24 +0000 (0:00:03.925) 0:02:44.503 **** 2026-02-04 01:04:31.467324 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:04:31.467328 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:04:31.467331 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:04:31.467335 | orchestrator | 2026-02-04 01:04:31.467339 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:04:31.467343 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-04 01:04:31.467348 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-04 01:04:31.467352 | orchestrator | testbed-node-1 
: ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-04 01:04:31.467356 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-04 01:04:31.467359 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-04 01:04:31.467366 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-04 01:04:31.467370 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-04 01:04:31.467374 | orchestrator | 2026-02-04 01:04:31.467377 | orchestrator | 2026-02-04 01:04:31.467381 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:04:31.467385 | orchestrator | Wednesday 04 February 2026 01:04:29 +0000 (0:00:05.057) 0:02:49.560 **** 2026-02-04 01:04:31.467389 | orchestrator | =============================================================================== 2026-02-04 01:04:31.467393 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 22.91s 2026-02-04 01:04:31.467397 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 17.06s 2026-02-04 01:04:31.467401 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.59s 2026-02-04 01:04:31.467405 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.09s 2026-02-04 01:04:31.467409 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 13.44s 2026-02-04 01:04:31.467412 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.41s 2026-02-04 01:04:31.467416 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 11.06s 2026-02-04 01:04:31.467420 | orchestrator | prometheus : Restart 
prometheus-alertmanager container ------------------ 6.78s 2026-02-04 01:04:31.467424 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.59s 2026-02-04 01:04:31.467428 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.58s 2026-02-04 01:04:31.467432 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.16s 2026-02-04 01:04:31.467436 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 5.06s 2026-02-04 01:04:31.467440 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.38s 2026-02-04 01:04:31.467443 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 3.93s 2026-02-04 01:04:31.467447 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.67s 2026-02-04 01:04:31.467451 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.62s 2026-02-04 01:04:31.467455 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.10s 2026-02-04 01:04:31.467459 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 1.91s 2026-02-04 01:04:31.467463 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.81s 2026-02-04 01:04:31.467467 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.56s 2026-02-04 01:04:31.467473 | orchestrator | 2026-02-04 01:04:31 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:04:31.467479 | orchestrator | 2026-02-04 01:04:31 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED 2026-02-04 01:04:31.467940 | orchestrator | 2026-02-04 01:04:31 | INFO  | Task 03a02d33-3e5b-4aae-86de-2ea2b7c4afad is in state STARTED 2026-02-04 01:04:31.468109 | orchestrator | 
2026-02-04 01:04:31 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:04:34.514293 | orchestrator | 2026-02-04 01:04:34 | INFO  | Task ca7dbfd6-1e95-4a20-81df-751168ccfc72 is in state STARTED
2026-02-04 01:04:34.516099 | orchestrator | 2026-02-04 01:04:34 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED
2026-02-04 01:04:34.518073 | orchestrator | 2026-02-04 01:04:34 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state STARTED
2026-02-04 01:04:34.519697 | orchestrator | 2026-02-04 01:04:34 | INFO  | Task 03a02d33-3e5b-4aae-86de-2ea2b7c4afad is in state STARTED
2026-02-04 01:04:34.520331 | orchestrator | 2026-02-04 01:04:34 | INFO  | Wait 1 second(s) until the next check
[identical status polling repeated every ~3 s from 01:04:37 through 01:05:32; only state changes shown below]
2026-02-04 01:04:55.853399 | orchestrator | 2026-02-04 01:04:55 | INFO  | Task 7ee7721f-a402-4710-9819-42772fd6d2f6 is in state STARTED
2026-02-04 01:04:55.855670 | orchestrator | 2026-02-04 01:04:55 | INFO  | Task 2f41f4f6-2463-48a1-8d73-5e2e7ef48679 is in state SUCCESS
2026-02-04 01:05:35.428068 | orchestrator | 2026-02-04 01:05:35 | INFO  | Task 81fef954-8c97-4b1d-9a1e-92d8536a190d is in state STARTED
2026-02-04 01:05:35.433993 | orchestrator | 2026-02-04 01:05:35 | INFO  | Task 
03a02d33-3e5b-4aae-86de-2ea2b7c4afad is in state SUCCESS
2026-02-04 01:05:35.435742 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-02-04 01:05:35.435755 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-02-04 01:05:35.435761 | orchestrator | Wednesday 04 February 2026 00:58:19 +0000 (0:00:00.086) 0:00:00.086 ****
2026-02-04 01:05:35.435767 | orchestrator | changed: [localhost]
2026-02-04 01:05:35.435782 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-02-04 01:05:35.435789 | orchestrator | Wednesday 04 February 2026 00:58:20 +0000 (0:00:00.846) 0:00:00.933 ****
2026-02-04 01:05:35.435797 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
[the STILL ALIVE keepalive line repeated 6 more times; omitted]
2026-02-04 01:05:35.435860 | orchestrator | changed: [localhost]
2026-02-04 01:05:35.435868 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-02-04 01:05:35.435872 | orchestrator | Wednesday 04 February 2026 01:04:16 +0000 (0:05:56.314) 0:05:57.247 ****
2026-02-04 01:05:35.435876 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left).
2026-02-04 01:05:35.435880 | orchestrator | changed: [localhost]
2026-02-04 01:05:35.435888 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 01:05:35.435985 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 01:05:35.435991 | orchestrator | Wednesday 04 February 2026 01:04:51 +0000 (0:00:35.461) 0:06:32.709 ****
2026-02-04 01:05:35.435995 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:05:35.435999 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:05:35.436003 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:05:35.436011 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 01:05:35.436015 | orchestrator | Wednesday 04 February 2026 01:04:52 +0000 (0:00:00.313) 0:06:33.022 ****
2026-02-04 01:05:35.436019 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-02-04 01:05:35.436022 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-02-04 01:05:35.436027 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-02-04 01:05:35.436031 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-02-04 01:05:35.436039 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-02-04 01:05:35.436050 | orchestrator | skipping: no hosts matched
2026-02-04 01:05:35.436058 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 01:05:35.436062 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:05:35.436068 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:05:35.436073 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:05:35.436077 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:05:35.436088 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 01:05:35.436092 | orchestrator | Wednesday 04 February 2026 01:04:52 +0000 (0:00:00.550) 0:06:33.573 ****
2026-02-04 01:05:35.436096 | orchestrator | ===============================================================================
2026-02-04 01:05:35.436100 | orchestrator | Download ironic-agent initramfs --------------------------------------- 356.31s
2026-02-04 01:05:35.436104 | orchestrator | Download ironic-agent kernel ------------------------------------------- 35.46s
2026-02-04 01:05:35.436108 | orchestrator | Ensure the destination directory exists --------------------------------- 0.85s
2026-02-04 01:05:35.436112 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.55s
2026-02-04 01:05:35.436116 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-02-04 01:05:35.436127 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 01:05:35.436135 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 01:05:35.436144 | orchestrator | Wednesday 04 February 2026 01:02:50 +0000 (0:00:00.196) 0:00:00.196 ****
2026-02-04 01:05:35.436148 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:05:35.436152 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:05:35.436156 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:05:35.436170 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 01:05:35.436174 | orchestrator | Wednesday 04 February 2026 01:02:50 +0000 (0:00:00.285) 0:00:00.482 ****
2026-02-04 01:05:35.436178 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-02-04 01:05:35.436188 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-02-04 01:05:35.436192 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-02-04 01:05:35.436199 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-02-04 01:05:35.436207 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-04 01:05:35.436211 | orchestrator | Wednesday 04 February 2026 01:02:50 +0000 (0:00:00.331) 0:00:00.813 ****
2026-02-04 01:05:35.436215 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:05:35.436223 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-02-04 01:05:35.436227 | orchestrator | Wednesday 04 February 2026 01:02:51 +0000 (0:00:00.728) 0:00:01.541 ****
2026-02-04 01:05:35.436230 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-02-04 01:05:35.436238 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-02-04 01:05:35.436242 | orchestrator | Wednesday 04 February 2026 01:02:54 +0000 (0:00:03.354) 0:00:04.896 ****
2026-02-04 01:05:35.436246 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-02-04 01:05:35.436250 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-02-04 01:05:35.436258 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-02-04 01:05:35.436261 | orchestrator | Wednesday 04 February 2026 01:03:01 +0000 (0:00:06.804) 0:00:11.700 ****
2026-02-04 01:05:35.436265 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-04 01:05:35.436273 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-02-04 01:05:35.436277 | orchestrator | Wednesday 04 February 2026 01:03:05 +0000 (0:00:03.416) 0:00:15.117 ****
2026-02-04 01:05:35.436281 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-04 01:05:35.436285 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-02-04 01:05:35.436292 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-02-04 01:05:35.436296 | orchestrator | Wednesday 04 February 2026 01:03:09 +0000 (0:00:04.035) 0:00:19.152 ****
2026-02-04 01:05:35.436300 | orchestrator | ok: 
[testbed-node-0] => (item=admin) 2026-02-04 01:05:35.436304 | orchestrator | 2026-02-04 01:05:35.436308 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-02-04 01:05:35.436312 | orchestrator | Wednesday 04 February 2026 01:03:12 +0000 (0:00:03.377) 0:00:22.529 **** 2026-02-04 01:05:35.436316 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-02-04 01:05:35.436320 | orchestrator | 2026-02-04 01:05:35.436323 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-02-04 01:05:35.436327 | orchestrator | Wednesday 04 February 2026 01:03:16 +0000 (0:00:03.887) 0:00:26.417 **** 2026-02-04 01:05:35.436334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:05:35.436352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:05:35.436358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:05:35.436365 | orchestrator | 2026-02-04 01:05:35.436369 | orchestrator | 
TASK [glance : include_tasks] **************************************************
2026-02-04 01:05:35.436373 | orchestrator | Wednesday 04 February 2026 01:03:23 +0000 (0:00:06.592) 0:00:33.010 ****
2026-02-04 01:05:35.436377 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:05:35.436384 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-02-04 01:05:35.436388 | orchestrator | Wednesday 04 February 2026 01:03:23 +0000 (0:00:00.559) 0:00:33.569 ****
2026-02-04 01:05:35.436392 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:05:35.436396 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:05:35.436400 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:05:35.436408 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-02-04 01:05:35.436412 | orchestrator | Wednesday 04 February 2026 01:03:27 +0000 (0:00:03.434) 0:00:37.003 ****
2026-02-04 01:05:35.436416 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-04 01:05:35.436422 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-04 01:05:35.436426 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-04 01:05:35.436438 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-02-04 01:05:35.436443 | orchestrator | Wednesday 04 February 2026 01:03:28 +0000 (0:00:01.467) 0:00:38.470 ****
2026-02-04 01:05:35.436449 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-04 01:05:35.436455 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-04 01:05:35.436461 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-04 01:05:35.436473 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-02-04 01:05:35.436479 | orchestrator | Wednesday 04 February 2026 01:03:29 +0000 (0:00:01.098) 0:00:39.569 ****
2026-02-04 01:05:35.436486 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:05:35.436492 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:05:35.436499 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:05:35.436537 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-02-04 01:05:35.436542 | orchestrator | Wednesday 04 February 2026 01:03:30 +0000 (0:00:00.627) 0:00:40.197 ****
2026-02-04 01:05:35.436546 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:05:35.436554 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-02-04 01:05:35.436557 | orchestrator | Wednesday 04 February 2026 01:03:30 +0000 (0:00:00.352) 0:00:40.550 ****
2026-02-04 01:05:35.436565 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:05:35.436569 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:05:35.436573 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:05:35.436581 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-04 01:05:35.436584 | orchestrator | Wednesday 04 February 2026 01:03:30 +0000 (0:00:00.321) 0:00:40.871 ****
2026-02-04 01:05:35.436588 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:05:35.436592 | orchestrator | 2026-02-04 01:05:35.436596 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-02-04 01:05:35.436600 | orchestrator | Wednesday 04 February 2026 01:03:31 +0000 (0:00:00.528) 0:00:41.400 **** 2026-02-04 01:05:35.436604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:05:35.436616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:05:35.436625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:05:35.436630 | orchestrator | 2026-02-04 01:05:35.436635 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-02-04 01:05:35.436640 | orchestrator | Wednesday 04 February 2026 01:03:36 +0000 (0:00:04.918) 0:00:46.318 **** 2026-02-04 01:05:35.436651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 01:05:35.436659 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:35.436666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 01:05:35.436687 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:35.436698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 01:05:35.436708 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:05:35.436714 | orchestrator | 2026-02-04 01:05:35.436718 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-02-04 01:05:35.436732 | orchestrator | Wednesday 04 February 2026 01:03:38 +0000 (0:00:02.329) 0:00:48.648 **** 2026-02-04 01:05:35.436737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 01:05:35.436751 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:35.436756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 01:05:35.436761 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:05:35.436772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 01:05:35.436780 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:35.436785 | orchestrator | 2026-02-04 01:05:35.436790 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-02-04 01:05:35.436794 | orchestrator | Wednesday 04 February 2026 01:03:41 +0000 (0:00:02.873) 0:00:51.521 **** 2026-02-04 01:05:35.436798 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:35.436802 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:05:35.436806 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:35.436809 | orchestrator | 2026-02-04 01:05:35.436813 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-02-04 01:05:35.436817 | orchestrator | Wednesday 04 February 2026 01:03:45 +0000 (0:00:04.250) 0:00:55.772 **** 2026-02-04 01:05:35.436821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:05:35.436831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:05:35.436839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-04 01:05:35.436843 | orchestrator |
2026-02-04 01:05:35.436847 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-02-04 01:05:35.436851 | orchestrator | Wednesday 04 February 2026 01:03:49 +0000 (0:00:03.856) 0:00:59.629 ****
2026-02-04 01:05:35.436855 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:05:35.436859 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:05:35.436862 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:05:35.436866 | orchestrator |
2026-02-04 01:05:35.436870 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-02-04 01:05:35.436874 | orchestrator | Wednesday 04 February 2026 01:03:54 +0000 (0:00:05.202) 0:01:04.832 ****
2026-02-04 01:05:35.436879 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:05:35.436886 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:05:35.436975 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:05:35.436993 | orchestrator |
2026-02-04 01:05:35.436998 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2026-02-04 01:05:35.437002 | orchestrator | Wednesday 04 February 2026 01:04:00 +0000 (0:00:05.224) 0:01:10.056 ****
2026-02-04 01:05:35.437006 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:05:35.437015 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:05:35.437019 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:05:35.437027 | orchestrator |
2026-02-04 01:05:35.437031 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-02-04 01:05:35.437035 | orchestrator | Wednesday 04 February 2026 01:04:03 +0000 (0:00:03.268) 0:01:13.325 ****
2026-02-04 01:05:35.437039 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:05:35.437043 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:05:35.437046 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:05:35.437050 | orchestrator |
2026-02-04 01:05:35.437057 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-02-04 01:05:35.437061 | orchestrator | Wednesday 04 February 2026 01:04:06 +0000 (0:00:03.434) 0:01:16.760 ****
2026-02-04 01:05:35.437065 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:05:35.437069 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:05:35.437078 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:05:35.437082 | orchestrator |
2026-02-04 01:05:35.437086 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-02-04 01:05:35.437090 | orchestrator | Wednesday 04 February 2026 01:04:10 +0000 (0:00:03.387) 0:01:20.148 ****
2026-02-04 01:05:35.437094 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:05:35.437098 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:05:35.437101 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:05:35.437105 | orchestrator |
2026-02-04 01:05:35.437109 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-02-04 01:05:35.437113 | orchestrator | Wednesday 04 February 2026 01:04:10 +0000 (0:00:00.319) 0:01:20.467 ****
2026-02-04 01:05:35.437117 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-02-04 01:05:35.437121 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:05:35.437125 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-02-04 01:05:35.437129 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:05:35.437133 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-02-04 01:05:35.437137 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:05:35.437140 | orchestrator |
2026-02-04 01:05:35.437144 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-02-04 01:05:35.437148 | orchestrator | Wednesday 04 February 2026 01:04:16 +0000 (0:00:05.816) 0:01:26.284 ****
2026-02-04 01:05:35.437152 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:05:35.437156 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:05:35.437160 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:05:35.437164 | orchestrator |
2026-02-04 01:05:35.437168 | orchestrator | TASK [glance : Check glance containers] ****************************************
2026-02-04 01:05:35.437172 | orchestrator | Wednesday 04 February 2026 01:04:20 +0000 (0:00:04.446) 0:01:30.730 ****
2026-02-04 01:05:35.437176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image':
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:05:35.437189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:05:35.437194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 01:05:35.437198 | orchestrator | 2026-02-04 01:05:35.437202 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-04 01:05:35.437209 | orchestrator | Wednesday 04 February 2026 01:04:24 +0000 (0:00:04.025) 0:01:34.755 **** 2026-02-04 01:05:35.437213 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:35.437217 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:05:35.437220 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:35.437224 | orchestrator | 2026-02-04 01:05:35.437228 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-02-04 01:05:35.437232 | orchestrator | Wednesday 04 February 2026 01:04:25 +0000 (0:00:00.593) 0:01:35.349 **** 2026-02-04 01:05:35.437237 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:35.437244 | orchestrator | 2026-02-04 01:05:35.437252 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-02-04 01:05:35.437260 | orchestrator | Wednesday 04 February 2026 01:04:27 +0000 (0:00:02.259) 0:01:37.608 **** 2026-02-04 01:05:35.437266 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:35.437272 | orchestrator | 2026-02-04 01:05:35.437278 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-02-04 01:05:35.437284 | orchestrator | Wednesday 04 February 2026 01:04:29 +0000 (0:00:02.273) 0:01:39.882 **** 2026-02-04 01:05:35.437290 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:35.437295 | orchestrator | 2026-02-04 01:05:35.437301 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-02-04 01:05:35.437307 | orchestrator | Wednesday 04 February 2026 01:04:32 +0000 (0:00:02.069) 0:01:41.951 **** 2026-02-04 01:05:35.437313 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:35.437320 | orchestrator | 2026-02-04 01:05:35.437326 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-02-04 01:05:35.437333 | orchestrator | Wednesday 04 February 2026 01:05:01 +0000 (0:00:29.407) 0:02:11.359 **** 2026-02-04 01:05:35.437338 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:35.437344 | orchestrator | 2026-02-04 01:05:35.437350 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-04 01:05:35.437357 | orchestrator | Wednesday 04 February 2026 01:05:03 +0000 (0:00:02.265) 0:02:13.625 **** 2026-02-04 01:05:35.437362 | orchestrator | 2026-02-04 01:05:35.437368 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-04 01:05:35.437378 | orchestrator | Wednesday 04 February 2026 01:05:04 +0000 (0:00:00.334) 0:02:13.959 **** 2026-02-04 01:05:35.437385 | orchestrator | 2026-02-04 01:05:35.437390 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-02-04 01:05:35.437417 | orchestrator | Wednesday 04 February 2026 01:05:04 +0000 (0:00:00.073) 0:02:14.032 **** 2026-02-04 01:05:35.437421 | orchestrator | 2026-02-04 01:05:35.437425 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-02-04 01:05:35.437429 | orchestrator | Wednesday 04 February 2026 01:05:04 +0000 (0:00:00.068) 0:02:14.101 **** 2026-02-04 01:05:35.437433 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:35.437437 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:05:35.437441 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:05:35.437445 | orchestrator | 2026-02-04 01:05:35.437449 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:05:35.437453 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-04 01:05:35.437458 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-04 01:05:35.437462 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-04 01:05:35.437466 | orchestrator | 2026-02-04 01:05:35.437470 | orchestrator | 2026-02-04 01:05:35.437474 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:05:35.437477 | orchestrator | Wednesday 04 February 2026 01:05:32 +0000 (0:00:28.392) 0:02:42.493 **** 2026-02-04 01:05:35.437485 | orchestrator | =============================================================================== 2026-02-04 01:05:35.437490 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 29.41s 2026-02-04 01:05:35.437496 | orchestrator | glance : Restart glance-api container ---------------------------------- 28.39s 2026-02-04 01:05:35.437502 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.80s 2026-02-04 01:05:35.437511 | orchestrator | glance : Ensuring config directories exist ------------------------------ 6.59s 2026-02-04 01:05:35.437518 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 5.82s 2026-02-04 01:05:35.437525 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.22s 2026-02-04 01:05:35.437531 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.20s 2026-02-04 01:05:35.437537 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.92s 2026-02-04 01:05:35.437543 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.45s 2026-02-04 01:05:35.437549 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.25s 2026-02-04 01:05:35.437555 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.04s 2026-02-04 01:05:35.437561 | orchestrator | glance : Check glance containers ---------------------------------------- 4.03s 2026-02-04 01:05:35.437567 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.89s 2026-02-04 01:05:35.437573 | orchestrator | glance : Copying over config.json files for services -------------------- 3.86s 2026-02-04 01:05:35.437579 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.43s 2026-02-04 01:05:35.437585 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.43s 2026-02-04 01:05:35.437591 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.42s 2026-02-04 01:05:35.437597 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.39s 2026-02-04 01:05:35.437604 | orchestrator | 
service-ks-register : glance | Creating roles --------------------------- 3.38s 2026-02-04 01:05:35.437610 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.35s 2026-02-04 01:05:35.437617 | orchestrator | 2026-02-04 01:05:35 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:05:38.480488 | orchestrator | 2026-02-04 01:05:38 | INFO  | Task ca7dbfd6-1e95-4a20-81df-751168ccfc72 is in state STARTED 2026-02-04 01:05:38.482286 | orchestrator | 2026-02-04 01:05:38 | INFO  | Task 81fef954-8c97-4b1d-9a1e-92d8536a190d is in state STARTED 2026-02-04 01:05:38.484340 | orchestrator | 2026-02-04 01:05:38 | INFO  | Task 7ee7721f-a402-4710-9819-42772fd6d2f6 is in state STARTED 2026-02-04 01:05:38.486410 | orchestrator | 2026-02-04 01:05:38 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:05:38.486642 | orchestrator | 2026-02-04 01:05:38 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:05:41.534942 | orchestrator | 2026-02-04 01:05:41 | INFO  | Task ca7dbfd6-1e95-4a20-81df-751168ccfc72 is in state STARTED 2026-02-04 01:05:41.536101 | orchestrator | 2026-02-04 01:05:41 | INFO  | Task 81fef954-8c97-4b1d-9a1e-92d8536a190d is in state STARTED 2026-02-04 01:05:41.537551 | orchestrator | 2026-02-04 01:05:41 | INFO  | Task 7ee7721f-a402-4710-9819-42772fd6d2f6 is in state STARTED 2026-02-04 01:05:41.538945 | orchestrator | 2026-02-04 01:05:41 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:05:41.539338 | orchestrator | 2026-02-04 01:05:41 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:05:44.579104 | orchestrator | 2026-02-04 01:05:44 | INFO  | Task ca7dbfd6-1e95-4a20-81df-751168ccfc72 is in state STARTED 2026-02-04 01:05:44.579871 | orchestrator | 2026-02-04 01:05:44 | INFO  | Task 81fef954-8c97-4b1d-9a1e-92d8536a190d is in state STARTED 2026-02-04 01:05:44.580454 | orchestrator | 2026-02-04 01:05:44 | INFO  | Task 
7ee7721f-a402-4710-9819-42772fd6d2f6 is in state STARTED 2026-02-04 01:05:44.582049 | orchestrator | 2026-02-04 01:05:44 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:05:44.582096 | orchestrator | 2026-02-04 01:05:44 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:05:47.625979 | orchestrator | 2026-02-04 01:05:47 | INFO  | Task ca7dbfd6-1e95-4a20-81df-751168ccfc72 is in state STARTED 2026-02-04 01:05:47.627163 | orchestrator | 2026-02-04 01:05:47 | INFO  | Task 81fef954-8c97-4b1d-9a1e-92d8536a190d is in state STARTED 2026-02-04 01:05:47.629494 | orchestrator | 2026-02-04 01:05:47 | INFO  | Task 7ee7721f-a402-4710-9819-42772fd6d2f6 is in state STARTED 2026-02-04 01:05:47.631625 | orchestrator | 2026-02-04 01:05:47 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:05:47.632004 | orchestrator | 2026-02-04 01:05:47 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:05:50.680721 | orchestrator | 2026-02-04 01:05:50 | INFO  | Task ca7dbfd6-1e95-4a20-81df-751168ccfc72 is in state STARTED 2026-02-04 01:05:50.683819 | orchestrator | 2026-02-04 01:05:50 | INFO  | Task 81fef954-8c97-4b1d-9a1e-92d8536a190d is in state STARTED 2026-02-04 01:05:50.686272 | orchestrator | 2026-02-04 01:05:50 | INFO  | Task 7ee7721f-a402-4710-9819-42772fd6d2f6 is in state STARTED 2026-02-04 01:05:50.688604 | orchestrator | 2026-02-04 01:05:50 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:05:50.688655 | orchestrator | 2026-02-04 01:05:50 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:05:53.726373 | orchestrator | 2026-02-04 01:05:53 | INFO  | Task ca7dbfd6-1e95-4a20-81df-751168ccfc72 is in state STARTED 2026-02-04 01:05:53.727444 | orchestrator | 2026-02-04 01:05:53 | INFO  | Task 81fef954-8c97-4b1d-9a1e-92d8536a190d is in state STARTED 2026-02-04 01:05:53.729428 | orchestrator | 2026-02-04 01:05:53 | INFO  | Task 
7ee7721f-a402-4710-9819-42772fd6d2f6 is in state STARTED 2026-02-04 01:05:53.731457 | orchestrator | 2026-02-04 01:05:53 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:05:53.731506 | orchestrator | 2026-02-04 01:05:53 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:05:56.777662 | orchestrator | 2026-02-04 01:05:56 | INFO  | Task ca7dbfd6-1e95-4a20-81df-751168ccfc72 is in state STARTED 2026-02-04 01:05:56.778703 | orchestrator | 2026-02-04 01:05:56 | INFO  | Task 81fef954-8c97-4b1d-9a1e-92d8536a190d is in state STARTED 2026-02-04 01:05:56.780251 | orchestrator | 2026-02-04 01:05:56 | INFO  | Task 7ee7721f-a402-4710-9819-42772fd6d2f6 is in state STARTED 2026-02-04 01:05:56.782450 | orchestrator | 2026-02-04 01:05:56 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:05:56.782500 | orchestrator | 2026-02-04 01:05:56 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:05:59.823373 | orchestrator | 2026-02-04 01:05:59.823430 | orchestrator | 2026-02-04 01:05:59.823440 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 01:05:59.823447 | orchestrator | 2026-02-04 01:05:59.823455 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 01:05:59.823462 | orchestrator | Wednesday 04 February 2026 01:03:10 +0000 (0:00:00.229) 0:00:00.229 **** 2026-02-04 01:05:59.823469 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:05:59.823476 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:05:59.823482 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:05:59.823499 | orchestrator | 2026-02-04 01:05:59.823511 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 01:05:59.823534 | orchestrator | Wednesday 04 February 2026 01:03:10 +0000 (0:00:00.275) 0:00:00.504 **** 2026-02-04 01:05:59.823541 | orchestrator | ok: 
[testbed-node-0] => (item=enable_cinder_True) 2026-02-04 01:05:59.823548 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-02-04 01:05:59.823553 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-02-04 01:05:59.823556 | orchestrator | 2026-02-04 01:05:59.823560 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-02-04 01:05:59.823564 | orchestrator | 2026-02-04 01:05:59.823571 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-04 01:05:59.823577 | orchestrator | Wednesday 04 February 2026 01:03:10 +0000 (0:00:00.334) 0:00:00.839 **** 2026-02-04 01:05:59.823583 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:05:59.823590 | orchestrator | 2026-02-04 01:05:59.823655 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-02-04 01:05:59.823674 | orchestrator | Wednesday 04 February 2026 01:03:11 +0000 (0:00:00.467) 0:00:01.307 **** 2026-02-04 01:05:59.823736 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-02-04 01:05:59.823741 | orchestrator | 2026-02-04 01:05:59.823744 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-02-04 01:05:59.823748 | orchestrator | Wednesday 04 February 2026 01:03:14 +0000 (0:00:03.490) 0:00:04.798 **** 2026-02-04 01:05:59.823753 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-02-04 01:05:59.823757 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-02-04 01:05:59.823761 | orchestrator | 2026-02-04 01:05:59.823765 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-02-04 
01:05:59.823768 | orchestrator | Wednesday 04 February 2026 01:03:21 +0000 (0:00:06.854) 0:00:11.652 **** 2026-02-04 01:05:59.823772 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-04 01:05:59.823875 | orchestrator | 2026-02-04 01:05:59.823882 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-02-04 01:05:59.823886 | orchestrator | Wednesday 04 February 2026 01:03:25 +0000 (0:00:03.394) 0:00:15.047 **** 2026-02-04 01:05:59.823900 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-04 01:05:59.823905 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-02-04 01:05:59.823909 | orchestrator | 2026-02-04 01:05:59.823913 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-02-04 01:05:59.823946 | orchestrator | Wednesday 04 February 2026 01:03:29 +0000 (0:00:04.016) 0:00:19.064 **** 2026-02-04 01:05:59.823951 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-04 01:05:59.823955 | orchestrator | 2026-02-04 01:05:59.823958 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-02-04 01:05:59.823962 | orchestrator | Wednesday 04 February 2026 01:03:32 +0000 (0:00:03.599) 0:00:22.664 **** 2026-02-04 01:05:59.823966 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-02-04 01:05:59.823970 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-02-04 01:05:59.823974 | orchestrator | 2026-02-04 01:05:59.823978 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-02-04 01:05:59.823981 | orchestrator | Wednesday 04 February 2026 01:03:40 +0000 (0:00:07.448) 0:00:30.113 **** 2026-02-04 01:05:59.823988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:05:59.824009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:05:59.824016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:05:59.824021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824031 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824073 | orchestrator | 2026-02-04 01:05:59.824077 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-04 01:05:59.824081 | orchestrator | Wednesday 04 February 2026 01:03:42 +0000 (0:00:02.387) 0:00:32.501 **** 2026-02-04 01:05:59.824086 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.824090 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:05:59.824094 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:59.824098 | orchestrator | 2026-02-04 01:05:59.824101 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-04 01:05:59.824105 | orchestrator | Wednesday 04 February 2026 01:03:43 +0000 (0:00:00.497) 0:00:32.998 **** 2026-02-04 01:05:59.824109 | orchestrator | 
included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:05:59.824113 | orchestrator | 2026-02-04 01:05:59.824119 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-02-04 01:05:59.824124 | orchestrator | Wednesday 04 February 2026 01:03:44 +0000 (0:00:01.396) 0:00:34.394 **** 2026-02-04 01:05:59.824127 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-02-04 01:05:59.824132 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-02-04 01:05:59.824136 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-02-04 01:05:59.824140 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-02-04 01:05:59.824144 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-02-04 01:05:59.824148 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-02-04 01:05:59.824152 | orchestrator | 2026-02-04 01:05:59.824155 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-02-04 01:05:59.824159 | orchestrator | Wednesday 04 February 2026 01:03:46 +0000 (0:00:02.050) 0:00:36.445 **** 2026-02-04 01:05:59.824166 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-04 01:05:59.824189 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-04 01:05:59.824197 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-04 01:05:59.824201 | orchestrator | skipping: 
[testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-04 01:05:59.824208 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-04 01:05:59.824215 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-04 01:05:59.824219 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-04 01:05:59.824260 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 
2026-02-04 01:05:59.824266 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-04 01:05:59.824273 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-04 01:05:59.824280 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-04 01:05:59.824284 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-04 01:05:59.824291 | orchestrator | 2026-02-04 01:05:59.824294 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-02-04 01:05:59.824298 | orchestrator | Wednesday 04 February 2026 01:03:49 +0000 (0:00:03.446) 0:00:39.892 **** 2026-02-04 01:05:59.824303 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-04 01:05:59.824307 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-04 01:05:59.824311 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-04 01:05:59.824315 | orchestrator | 2026-02-04 01:05:59.824319 | orchestrator | 
TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-02-04 01:05:59.824323 | orchestrator | Wednesday 04 February 2026 01:03:51 +0000 (0:00:01.962) 0:00:41.854 **** 2026-02-04 01:05:59.824327 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-02-04 01:05:59.824331 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-02-04 01:05:59.824335 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-02-04 01:05:59.824338 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-02-04 01:05:59.824342 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-02-04 01:05:59.824346 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-02-04 01:05:59.824350 | orchestrator | 2026-02-04 01:05:59.824354 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-02-04 01:05:59.824358 | orchestrator | Wednesday 04 February 2026 01:03:54 +0000 (0:00:02.883) 0:00:44.738 **** 2026-02-04 01:05:59.824362 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-02-04 01:05:59.824366 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-02-04 01:05:59.824369 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-02-04 01:05:59.824373 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-02-04 01:05:59.824377 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-02-04 01:05:59.824381 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-02-04 01:05:59.824385 | orchestrator | 2026-02-04 01:05:59.824389 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-02-04 01:05:59.824393 | orchestrator | Wednesday 04 February 2026 01:03:56 +0000 (0:00:01.685) 0:00:46.424 **** 2026-02-04 01:05:59.824397 | 
orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.824401 | orchestrator | 2026-02-04 01:05:59.824532 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-02-04 01:05:59.824536 | orchestrator | Wednesday 04 February 2026 01:03:56 +0000 (0:00:00.297) 0:00:46.727 **** 2026-02-04 01:05:59.824540 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.824544 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:05:59.824559 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:59.824564 | orchestrator | 2026-02-04 01:05:59.824568 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-04 01:05:59.824572 | orchestrator | Wednesday 04 February 2026 01:03:57 +0000 (0:00:00.738) 0:00:47.465 **** 2026-02-04 01:05:59.824576 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:05:59.824580 | orchestrator | 2026-02-04 01:05:59.824584 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-02-04 01:05:59.824588 | orchestrator | Wednesday 04 February 2026 01:03:58 +0000 (0:00:00.874) 0:00:48.340 **** 2026-02-04 01:05:59.824595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:05:59.824603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:05:59.824607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:05:59.824611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}}) 2026-02-04 01:05:59.824642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-04 01:05:59 | INFO  | Task ca7dbfd6-1e95-4a20-81df-751168ccfc72 is in state SUCCESS
2026-02-04 01:05:59.824685 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824689 | orchestrator | 2026-02-04 01:05:59.824693 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-02-04 01:05:59.824697 | orchestrator | Wednesday 04 February 2026 01:04:03 +0000 (0:00:04.771) 0:00:53.111 **** 2026-02-04 01:05:59.824702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 01:05:59.824706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.824710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.824718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.824725 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.824731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 01:05:59.824735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.824739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.824744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.824748 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:59.824754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 01:05:59.824762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.824767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.824772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.824776 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:05:59.824780 | orchestrator | 2026-02-04 01:05:59.824784 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-02-04 01:05:59.824788 | orchestrator | Wednesday 04 February 2026 01:04:03 +0000 (0:00:00.672) 0:00:53.784 **** 2026-02-04 01:05:59.824792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 01:05:59.824796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.824806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.824812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.824817 | orchestrator | skipping: 
[testbed-node-1] 2026-02-04 01:05:59.824821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 01:05:59.824833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.824837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.824844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.824850 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.824855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 01:05:59.824861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.824865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.824869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.824873 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:59.824877 | orchestrator | 2026-02-04 01:05:59.824881 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-02-04 01:05:59.824885 | orchestrator | Wednesday 04 February 2026 01:04:05 +0000 (0:00:01.579) 0:00:55.364 **** 2026-02-04 01:05:59.824891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:05:59.824898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:05:59.824904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:05:59.824909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': 
True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.824980 | orchestrator | 2026-02-04 01:05:59.824983 | 
orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-02-04 01:05:59.824989 | orchestrator | Wednesday 04 February 2026 01:04:09 +0000 (0:00:04.068) 0:00:59.433 **** 2026-02-04 01:05:59.824994 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-04 01:05:59.824998 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-04 01:05:59.825001 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-04 01:05:59.825005 | orchestrator | 2026-02-04 01:05:59.825009 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-02-04 01:05:59.825013 | orchestrator | Wednesday 04 February 2026 01:04:11 +0000 (0:00:01.598) 0:01:01.031 **** 2026-02-04 01:05:59.825019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:05:59.825023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:05:59.825027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:05:59.825034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.825041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.825045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.825051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.825062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.825067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 
01:05:59.825074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.825085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.825092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.825106 | orchestrator | 2026-02-04 01:05:59.825121 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-02-04 01:05:59.825128 | orchestrator | Wednesday 04 February 2026 01:04:25 +0000 (0:00:14.019) 0:01:15.050 **** 2026-02-04 01:05:59.825134 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:59.825141 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:05:59.825147 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:05:59.825154 | orchestrator | 2026-02-04 01:05:59.825160 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-02-04 01:05:59.825167 | orchestrator | Wednesday 04 February 2026 01:04:27 +0000 (0:00:02.002) 0:01:17.053 **** 2026-02-04 01:05:59.825173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 01:05:59.825181 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.825186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.825194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.825199 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.825206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 01:05:59.825211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.825220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.825225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.825230 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:05:59.825238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 01:05:59.825243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.825258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.825273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 01:05:59.825286 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:59.825293 | orchestrator | 2026-02-04 01:05:59.825299 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-02-04 01:05:59.825305 | orchestrator | Wednesday 04 February 2026 01:04:27 +0000 (0:00:00.566) 0:01:17.619 **** 2026-02-04 01:05:59.825311 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.825317 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:05:59.825324 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:59.825332 | orchestrator | 2026-02-04 01:05:59.825341 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-02-04 01:05:59.825347 | orchestrator | Wednesday 04 February 2026 01:04:27 +0000 (0:00:00.278) 0:01:17.898 **** 2026-02-04 01:05:59.825353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:05:59.825364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:05:59.825374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 01:05:59.825387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.825394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.825400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.825404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.825411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.825417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.825425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.825429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.825433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 01:05:59.825437 | orchestrator | 2026-02-04 01:05:59.825441 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-04 01:05:59.825445 | orchestrator | Wednesday 04 February 2026 01:04:30 +0000 (0:00:02.810) 0:01:20.709 **** 2026-02-04 01:05:59.825449 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.825453 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:05:59.825457 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:05:59.825461 | orchestrator | 2026-02-04 01:05:59.825464 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-02-04 01:05:59.825468 | orchestrator | Wednesday 04 February 2026 01:04:31 +0000 (0:00:00.396) 0:01:21.105 **** 2026-02-04 01:05:59.825472 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:59.825476 | orchestrator | 2026-02-04 01:05:59.825480 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-02-04 01:05:59.825484 | orchestrator | Wednesday 04 February 2026 01:04:33 +0000 (0:00:02.178) 0:01:23.284 **** 2026-02-04 01:05:59.825490 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:59.825494 | orchestrator | 2026-02-04 01:05:59.825498 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-02-04 01:05:59.825502 | 
orchestrator | Wednesday 04 February 2026 01:04:35 +0000 (0:00:02.228) 0:01:25.513 **** 2026-02-04 01:05:59.825506 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:59.825509 | orchestrator | 2026-02-04 01:05:59.825513 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-04 01:05:59.825517 | orchestrator | Wednesday 04 February 2026 01:04:56 +0000 (0:00:21.174) 0:01:46.688 **** 2026-02-04 01:05:59.825521 | orchestrator | 2026-02-04 01:05:59.825525 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-04 01:05:59.825533 | orchestrator | Wednesday 04 February 2026 01:04:56 +0000 (0:00:00.064) 0:01:46.752 **** 2026-02-04 01:05:59.825537 | orchestrator | 2026-02-04 01:05:59.825541 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-04 01:05:59.825545 | orchestrator | Wednesday 04 February 2026 01:04:56 +0000 (0:00:00.067) 0:01:46.820 **** 2026-02-04 01:05:59.825548 | orchestrator | 2026-02-04 01:05:59.825552 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-02-04 01:05:59.825556 | orchestrator | Wednesday 04 February 2026 01:04:56 +0000 (0:00:00.067) 0:01:46.887 **** 2026-02-04 01:05:59.825560 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:59.825564 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:05:59.825568 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:05:59.825572 | orchestrator | 2026-02-04 01:05:59.825575 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-02-04 01:05:59.825579 | orchestrator | Wednesday 04 February 2026 01:05:23 +0000 (0:00:26.915) 0:02:13.803 **** 2026-02-04 01:05:59.825585 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:59.825589 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:05:59.825593 | orchestrator | changed: 
[testbed-node-1] 2026-02-04 01:05:59.825597 | orchestrator | 2026-02-04 01:05:59.825601 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-02-04 01:05:59.825605 | orchestrator | Wednesday 04 February 2026 01:05:29 +0000 (0:00:05.215) 0:02:19.018 **** 2026-02-04 01:05:59.825608 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:59.825612 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:05:59.825616 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:05:59.825620 | orchestrator | 2026-02-04 01:05:59.825624 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-02-04 01:05:59.825628 | orchestrator | Wednesday 04 February 2026 01:05:50 +0000 (0:00:21.676) 0:02:40.694 **** 2026-02-04 01:05:59.825631 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:05:59.825635 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:05:59.825639 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:05:59.825643 | orchestrator | 2026-02-04 01:05:59.825647 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-02-04 01:05:59.825651 | orchestrator | Wednesday 04 February 2026 01:05:56 +0000 (0:00:05.665) 0:02:46.360 **** 2026-02-04 01:05:59.825655 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:05:59.825659 | orchestrator | 2026-02-04 01:05:59.825662 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:05:59.825667 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-04 01:05:59.825672 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 01:05:59.825676 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 01:05:59.825679 | orchestrator | 2026-02-04 
01:05:59.825683 | orchestrator | 2026-02-04 01:05:59.825687 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:05:59.825691 | orchestrator | Wednesday 04 February 2026 01:05:56 +0000 (0:00:00.258) 0:02:46.619 **** 2026-02-04 01:05:59.825695 | orchestrator | =============================================================================== 2026-02-04 01:05:59.825699 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 26.92s 2026-02-04 01:05:59.825702 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 21.68s 2026-02-04 01:05:59.825706 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 21.17s 2026-02-04 01:05:59.825710 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 14.02s 2026-02-04 01:05:59.825714 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.45s 2026-02-04 01:05:59.825720 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.85s 2026-02-04 01:05:59.825724 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.67s 2026-02-04 01:05:59.825728 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.22s 2026-02-04 01:05:59.825732 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.77s 2026-02-04 01:05:59.825736 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.07s 2026-02-04 01:05:59.825739 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.02s 2026-02-04 01:05:59.825743 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.60s 2026-02-04 01:05:59.825747 | orchestrator | service-ks-register : cinder | Creating services 
------------------------ 3.49s 2026-02-04 01:05:59.825751 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.45s 2026-02-04 01:05:59.825755 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.40s 2026-02-04 01:05:59.825761 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.88s 2026-02-04 01:05:59.825765 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.81s 2026-02-04 01:05:59.825769 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.39s 2026-02-04 01:05:59.825773 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.23s 2026-02-04 01:05:59.825777 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.18s 2026-02-04 01:05:59.825781 | orchestrator | 2026-02-04 01:05:59 | INFO  | Task 81fef954-8c97-4b1d-9a1e-92d8536a190d is in state STARTED 2026-02-04 01:05:59.827122 | orchestrator | 2026-02-04 01:05:59 | INFO  | Task 7ee7721f-a402-4710-9819-42772fd6d2f6 is in state STARTED 2026-02-04 01:05:59.829527 | orchestrator | 2026-02-04 01:05:59 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:05:59.829593 | orchestrator | 2026-02-04 01:05:59 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:06:02.872040 | orchestrator | 2026-02-04 01:06:02 | INFO  | Task 81fef954-8c97-4b1d-9a1e-92d8536a190d is in state STARTED 2026-02-04 01:06:02.874263 | orchestrator | 2026-02-04 01:06:02 | INFO  | Task 7ee7721f-a402-4710-9819-42772fd6d2f6 is in state STARTED 2026-02-04 01:06:02.876753 | orchestrator | 2026-02-04 01:06:02 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:06:02.877536 | orchestrator | 2026-02-04 01:06:02 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:06:05.915559 | orchestrator | 2026-02-04 
01:06:05 | INFO  | Task 81fef954-8c97-4b1d-9a1e-92d8536a190d is in state STARTED 2026-02-04 01:07:37.213118 | orchestrator | 2026-02-04 01:07:37 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state
STARTED 2026-02-04 01:07:37.213158 | orchestrator | 2026-02-04 01:07:37 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:07:40.246958 | orchestrator | 2026-02-04 01:07:40 | INFO  | Task 81fef954-8c97-4b1d-9a1e-92d8536a190d is in state STARTED 2026-02-04 01:07:40.247744 | orchestrator | 2026-02-04 01:07:40 | INFO  | Task 7ee7721f-a402-4710-9819-42772fd6d2f6 is in state STARTED 2026-02-04 01:07:40.248906 | orchestrator | 2026-02-04 01:07:40 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:07:40.248960 | orchestrator | 2026-02-04 01:07:40 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:07:43.289147 | orchestrator | 2026-02-04 01:07:43 | INFO  | Task 81fef954-8c97-4b1d-9a1e-92d8536a190d is in state STARTED 2026-02-04 01:07:43.290975 | orchestrator | 2026-02-04 01:07:43 | INFO  | Task 7ee7721f-a402-4710-9819-42772fd6d2f6 is in state STARTED 2026-02-04 01:07:43.292174 | orchestrator | 2026-02-04 01:07:43 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:07:43.292283 | orchestrator | 2026-02-04 01:07:43 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:07:46.324860 | orchestrator | 2026-02-04 01:07:46 | INFO  | Task 81fef954-8c97-4b1d-9a1e-92d8536a190d is in state STARTED 2026-02-04 01:07:46.326794 | orchestrator | 2026-02-04 01:07:46 | INFO  | Task 7ee7721f-a402-4710-9819-42772fd6d2f6 is in state STARTED 2026-02-04 01:07:46.328943 | orchestrator | 2026-02-04 01:07:46 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:07:46.329010 | orchestrator | 2026-02-04 01:07:46 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:07:49.364411 | orchestrator | 2026-02-04 01:07:49 | INFO  | Task 81fef954-8c97-4b1d-9a1e-92d8536a190d is in state STARTED 2026-02-04 01:07:49.366133 | orchestrator | 2026-02-04 01:07:49 | INFO  | Task 7ee7721f-a402-4710-9819-42772fd6d2f6 is in state STARTED 2026-02-04 01:07:49.369419 | orchestrator | 
2026-02-04 01:07:49 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:07:49.369501 | orchestrator | 2026-02-04 01:07:49 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:07:52.409602 | orchestrator | 2026-02-04 01:07:52 | INFO  | Task 81fef954-8c97-4b1d-9a1e-92d8536a190d is in state STARTED 2026-02-04 01:07:52.413093 | orchestrator | 2026-02-04 01:07:52 | INFO  | Task 7ee7721f-a402-4710-9819-42772fd6d2f6 is in state STARTED 2026-02-04 01:07:52.415249 | orchestrator | 2026-02-04 01:07:52 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:07:52.415311 | orchestrator | 2026-02-04 01:07:52 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:07:55.455586 | orchestrator | 2026-02-04 01:07:55 | INFO  | Task 81fef954-8c97-4b1d-9a1e-92d8536a190d is in state STARTED 2026-02-04 01:07:55.457114 | orchestrator | 2026-02-04 01:07:55 | INFO  | Task 7ee7721f-a402-4710-9819-42772fd6d2f6 is in state STARTED 2026-02-04 01:07:55.459278 | orchestrator | 2026-02-04 01:07:55 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:07:55.459327 | orchestrator | 2026-02-04 01:07:55 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:07:58.500649 | orchestrator | 2026-02-04 01:07:58 | INFO  | Task 81fef954-8c97-4b1d-9a1e-92d8536a190d is in state STARTED 2026-02-04 01:07:58.501182 | orchestrator | 2026-02-04 01:07:58 | INFO  | Task 7ee7721f-a402-4710-9819-42772fd6d2f6 is in state STARTED 2026-02-04 01:07:58.502719 | orchestrator | 2026-02-04 01:07:58 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:07:58.502762 | orchestrator | 2026-02-04 01:07:58 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:08:01.546319 | orchestrator | 2026-02-04 01:08:01 | INFO  | Task 81fef954-8c97-4b1d-9a1e-92d8536a190d is in state STARTED 2026-02-04 01:08:01.548263 | orchestrator | 2026-02-04 01:08:01 | INFO  | Task 
7ee7721f-a402-4710-9819-42772fd6d2f6 is in state STARTED 2026-02-04 01:08:01.550310 | orchestrator | 2026-02-04 01:08:01 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:08:01.550360 | orchestrator | 2026-02-04 01:08:01 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:08:04.582281 | orchestrator | 2026-02-04 01:08:04 | INFO  | Task 81fef954-8c97-4b1d-9a1e-92d8536a190d is in state STARTED 2026-02-04 01:08:04.583411 | orchestrator | 2026-02-04 01:08:04 | INFO  | Task 7ee7721f-a402-4710-9819-42772fd6d2f6 is in state STARTED 2026-02-04 01:08:04.584120 | orchestrator | 2026-02-04 01:08:04 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:08:04.584156 | orchestrator | 2026-02-04 01:08:04 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:08:07.615988 | orchestrator | 2026-02-04 01:08:07 | INFO  | Task 81fef954-8c97-4b1d-9a1e-92d8536a190d is in state SUCCESS 2026-02-04 01:08:07.617254 | orchestrator | 2026-02-04 01:08:07.617306 | orchestrator | 2026-02-04 01:08:07.617317 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 01:08:07.617324 | orchestrator | 2026-02-04 01:08:07.617348 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 01:08:07.617357 | orchestrator | Wednesday 04 February 2026 01:05:36 +0000 (0:00:00.225) 0:00:00.225 **** 2026-02-04 01:08:07.617364 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:08:07.617372 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:08:07.617395 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:08:07.617400 | orchestrator | 2026-02-04 01:08:07.617404 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 01:08:07.617408 | orchestrator | Wednesday 04 February 2026 01:05:37 +0000 (0:00:00.256) 0:00:00.481 **** 2026-02-04 01:08:07.617412 | orchestrator | ok: 
[testbed-node-0] => (item=enable_grafana_True) 2026-02-04 01:08:07.617423 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-02-04 01:08:07.617427 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-02-04 01:08:07.617434 | orchestrator | 2026-02-04 01:08:07.617438 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-02-04 01:08:07.617442 | orchestrator | 2026-02-04 01:08:07.617446 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-04 01:08:07.617450 | orchestrator | Wednesday 04 February 2026 01:05:37 +0000 (0:00:00.349) 0:00:00.831 **** 2026-02-04 01:08:07.617454 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:08:07.617458 | orchestrator | 2026-02-04 01:08:07.617462 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-02-04 01:08:07.617466 | orchestrator | Wednesday 04 February 2026 01:05:37 +0000 (0:00:00.449) 0:00:01.280 **** 2026-02-04 01:08:07.617471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:08:07.617485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:08:07.617501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:08:07.617505 | orchestrator | 2026-02-04 01:08:07.617509 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-02-04 01:08:07.617513 | orchestrator | Wednesday 04 February 2026 01:05:38 +0000 (0:00:00.653) 0:00:01.933 **** 2026-02-04 01:08:07.617517 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-02-04 01:08:07.617521 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-02-04 01:08:07.617526 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 01:08:07.617533 | orchestrator | 2026-02-04 01:08:07.617539 | orchestrator | TASK [grafana : include_tasks] 
************************************************* 2026-02-04 01:08:07.617546 | orchestrator | Wednesday 04 February 2026 01:05:39 +0000 (0:00:00.727) 0:00:02.661 **** 2026-02-04 01:08:07.617552 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:08:07.617559 | orchestrator | 2026-02-04 01:08:07.617565 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-02-04 01:08:07.617571 | orchestrator | Wednesday 04 February 2026 01:05:39 +0000 (0:00:00.578) 0:00:03.240 **** 2026-02-04 01:08:07.617611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:08:07.617619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:08:07.617626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:08:07.617646 | orchestrator | 2026-02-04 01:08:07.617657 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-02-04 01:08:07.617664 | orchestrator | Wednesday 04 February 2026 01:05:41 +0000 (0:00:01.166) 0:00:04.407 **** 2026-02-04 01:08:07.617669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-04 01:08:07.617673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-04 01:08:07.617677 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:08:07.617681 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:08:07.617709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-04 01:08:07.617714 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:08:07.617718 | orchestrator | 2026-02-04 01:08:07.617721 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-02-04 01:08:07.617725 | orchestrator | Wednesday 04 February 2026 01:05:41 +0000 (0:00:00.321) 0:00:04.729 **** 2026-02-04 01:08:07.617729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-04 01:08:07.617734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-04 01:08:07.617741 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:08:07.617745 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:08:07.617751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-04 01:08:07.617786 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:08:07.617790 | orchestrator | 2026-02-04 01:08:07.617794 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-02-04 01:08:07.617798 | orchestrator | Wednesday 04 February 2026 01:05:42 +0000 (0:00:00.651) 0:00:05.380 **** 2026-02-04 01:08:07.617802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:08:07.617809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:08:07.617813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:08:07.617818 | orchestrator | 2026-02-04 01:08:07.617821 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-02-04 01:08:07.617825 | orchestrator | Wednesday 04 February 2026 01:05:43 +0000 (0:00:01.107) 0:00:06.488 **** 2026-02-04 01:08:07.617832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:08:07.617838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:08:07.617843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:08:07.617847 | orchestrator | 2026-02-04 01:08:07.617851 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-02-04 01:08:07.617855 | orchestrator | Wednesday 04 February 2026 01:05:44 +0000 (0:00:01.226) 0:00:07.714 **** 2026-02-04 01:08:07.617859 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:08:07.617862 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:08:07.617866 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:08:07.617870 | orchestrator | 2026-02-04 01:08:07.617874 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-02-04 01:08:07.617878 | orchestrator | Wednesday 04 February 2026 01:05:44 +0000 (0:00:00.366) 0:00:08.081 **** 2026-02-04 01:08:07.617882 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-04 01:08:07.617886 | orchestrator | changed: 
[testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-04 01:08:07.617890 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-04 01:08:07.617894 | orchestrator |
2026-02-04 01:08:07.617898 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-02-04 01:08:07.617902 | orchestrator | Wednesday 04 February 2026 01:05:45 +0000 (0:00:01.082) 0:00:09.163 ****
2026-02-04 01:08:07.617910 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-04 01:08:07.617917 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-04 01:08:07.617922 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-04 01:08:07.617926 | orchestrator |
2026-02-04 01:08:07.617929 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-02-04 01:08:07.617936 | orchestrator | Wednesday 04 February 2026 01:05:47 +0000 (0:00:01.383) 0:00:10.547 ****
2026-02-04 01:08:07.617940 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-04 01:08:07.617944 | orchestrator |
2026-02-04 01:08:07.617948 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-02-04 01:08:07.617952 | orchestrator | Wednesday 04 February 2026 01:05:47 +0000 (0:00:00.681) 0:00:11.228 ****
2026-02-04 01:08:07.617955 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-02-04 01:08:07.617959 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-02-04 01:08:07.617963 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:08:07.617967 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:08:07.617971 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:08:07.617975 | orchestrator |
2026-02-04 01:08:07.617983 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-02-04 01:08:07.617987 | orchestrator | Wednesday 04 February 2026 01:05:48 +0000 (0:00:00.750) 0:00:11.979 ****
2026-02-04 01:08:07.617990 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:08:07.617994 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:08:07.617998 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:08:07.618308 | orchestrator |
2026-02-04 01:08:07.618322 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-02-04 01:08:07.618331 | orchestrator | Wednesday 04 February 2026 01:05:49 +0000 (0:00:00.476) 0:00:12.456 ****
2026-02-04 01:08:07.618337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1083962, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.2950413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-04 01:08:07.618353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1083962, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime':
1770164227.2950413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1083962, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.2950413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1084103, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.314275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1084103, 'dev': 129, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.314275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1084103, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.314275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1083973, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.2985559, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1083973, 'dev': 129, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.2985559, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1083973, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.2985559, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1084106, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3161874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 
1084106, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3161874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1084106, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3161874, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1084063, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.309356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1084063, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.309356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1084063, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.309356, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1084089, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3130348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1084089, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3130348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1084089, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3130348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1083960, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.2944124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1083960, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.2944124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1083960, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.2944124, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1083967, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.2960415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1083967, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.2960415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1083967, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.2960415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1083977, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3062532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1083977, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3062532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1083977, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3062532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1084079, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3109322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1084079, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3109322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1084079, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3109322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1084100, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.314275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1084100, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.314275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1084100, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.314275, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1083970, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.2960415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1083970, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.2960415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1083970, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.2960415, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1084085, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3124223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1084085, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3124223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1084085, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3124223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1084067, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3109322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618651 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1084067, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3109322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1084067, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3109322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1084055, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.309036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618668 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1084055, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.309036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1084055, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.309036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1084047, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3081975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2026-02-04 01:08:07.618683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1084047, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3081975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1084047, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3081975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1084080, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3120542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1084080, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3120542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1084080, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3120542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1084029, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.307705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1084029, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.307705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1084029, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.307705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1084096, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3133821, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1084096, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3133821, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1084322, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3548858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1084322, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3548858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1084096, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3133821, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1084144, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3283052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1084144, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1770164227.3283052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1084322, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3548858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1084128, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3189218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1084128, 'dev': 129, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3189218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1084144, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3283052, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1084186, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3311377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 15725, 'inode': 1084186, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3311377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1084128, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3189218, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1084113, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.316843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1084113, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.316843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1084186, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3311377, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1084274, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3466105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1084274, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3466105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1084113, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.316843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1084232, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3421369, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618836 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1084232, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3421369, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1084274, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3466105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1084277, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3466105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1084277, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3466105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1084316, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3532705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1084316, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3532705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1084232, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3421369, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1084270, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.345165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1084270, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1770164227.345165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1084277, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3466105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1084173, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3295138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 
1084173, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3295138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1084316, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3532705, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1084139, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.322013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1084139, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.322013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1084270, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.345165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1084171, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.328989, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1084171, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.328989, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1084173, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3295138, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1084133, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3213859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1084133, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3213859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1084139, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.322013, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1084171, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.328989, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1084180, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3308535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.618995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1084180, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3308535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.619001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1084133, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3213859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.619025 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1084293, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3528738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.619034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1084293, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3528738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.619041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1084180, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3308535, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.619047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1084282, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.349629, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.619144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1084282, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.349629, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.619152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1084293, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3528738, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.619160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1084117, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.317434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.619167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1084117, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.317434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.619171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1084282, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1770164227.349629, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.619175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1084120, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3182018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.619182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1084120, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3182018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.619186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 
1084117, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.317434, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.619193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1084260, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3430982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.619199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1084260, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3430982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.619203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1084120, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3182018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.619207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1084260, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3430982, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.619213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1084278, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3474653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.619217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1084278, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3474653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.619224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1084278, 'dev': 129, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770164227.3474653, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 01:08:07.619229 | orchestrator | 2026-02-04 01:08:07.619233 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-02-04 01:08:07.619237 | orchestrator | Wednesday 04 February 2026 01:06:24 +0000 (0:00:35.593) 0:00:48.050 **** 2026-02-04 01:08:07.619243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:08:07.619247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:08:07.619251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 01:08:07.619255 | orchestrator | 2026-02-04 01:08:07.619259 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-02-04 01:08:07.619263 | orchestrator | Wednesday 04 February 2026 01:06:25 +0000 (0:00:01.090) 0:00:49.140 **** 2026-02-04 01:08:07.619267 | 
orchestrator | changed: [testbed-node-0] 2026-02-04 01:08:07.619271 | orchestrator | 2026-02-04 01:08:07.619278 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-02-04 01:08:07.619284 | orchestrator | Wednesday 04 February 2026 01:06:27 +0000 (0:00:02.046) 0:00:51.186 **** 2026-02-04 01:08:07.619287 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:08:07.619291 | orchestrator | 2026-02-04 01:08:07.619295 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-04 01:08:07.619299 | orchestrator | Wednesday 04 February 2026 01:06:29 +0000 (0:00:01.986) 0:00:53.173 **** 2026-02-04 01:08:07.619303 | orchestrator | 2026-02-04 01:08:07.619307 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-04 01:08:07.619311 | orchestrator | Wednesday 04 February 2026 01:06:29 +0000 (0:00:00.071) 0:00:53.244 **** 2026-02-04 01:08:07.619315 | orchestrator | 2026-02-04 01:08:07.619318 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-04 01:08:07.619322 | orchestrator | Wednesday 04 February 2026 01:06:29 +0000 (0:00:00.060) 0:00:53.305 **** 2026-02-04 01:08:07.619326 | orchestrator | 2026-02-04 01:08:07.619330 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-02-04 01:08:07.619335 | orchestrator | Wednesday 04 February 2026 01:06:30 +0000 (0:00:00.174) 0:00:53.480 **** 2026-02-04 01:08:07.619341 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:08:07.619348 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:08:07.619353 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:08:07.619367 | orchestrator | 2026-02-04 01:08:07.619373 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-02-04 01:08:07.619404 | orchestrator | Wednesday 04 February 2026 
01:06:31 +0000 (0:00:01.757) 0:00:55.237 **** 2026-02-04 01:08:07.619408 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:08:07.619412 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:08:07.619416 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-02-04 01:08:07.619421 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-02-04 01:08:07.619425 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-02-04 01:08:07.619429 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 2026-02-04 01:08:07.619433 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (8 retries left). 2026-02-04 01:08:07.619437 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:08:07.619441 | orchestrator | 2026-02-04 01:08:07.619449 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-02-04 01:08:07.619453 | orchestrator | Wednesday 04 February 2026 01:07:35 +0000 (0:01:03.095) 0:01:58.332 **** 2026-02-04 01:08:07.619457 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:08:07.619461 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:08:07.619465 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:08:07.619469 | orchestrator | 2026-02-04 01:08:07.619473 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-02-04 01:08:07.619479 | orchestrator | Wednesday 04 February 2026 01:07:59 +0000 (0:00:24.255) 0:02:22.588 **** 2026-02-04 01:08:07.619483 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:08:07.619487 | orchestrator | 2026-02-04 01:08:07.619491 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-02-04 
01:08:07.619495 | orchestrator | Wednesday 04 February 2026 01:08:01 +0000 (0:00:02.305) 0:02:24.893 **** 2026-02-04 01:08:07.619498 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:08:07.619502 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:08:07.619506 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:08:07.619510 | orchestrator | 2026-02-04 01:08:07.619514 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-02-04 01:08:07.619518 | orchestrator | Wednesday 04 February 2026 01:08:02 +0000 (0:00:00.465) 0:02:25.359 **** 2026-02-04 01:08:07.619523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2026-02-04 01:08:07.619531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-02-04 01:08:07.619536 | orchestrator | 2026-02-04 01:08:07.619539 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-02-04 01:08:07.619543 | orchestrator | Wednesday 04 February 2026 01:08:04 +0000 (0:00:02.566) 0:02:27.925 **** 2026-02-04 01:08:07.619547 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:08:07.619551 | orchestrator | 2026-02-04 01:08:07.619555 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:08:07.619559 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 
01:08:07.619564 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 01:08:07.619568 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 01:08:07.619572 | orchestrator | 2026-02-04 01:08:07.619576 | orchestrator | 2026-02-04 01:08:07.619580 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:08:07.619586 | orchestrator | Wednesday 04 February 2026 01:08:04 +0000 (0:00:00.220) 0:02:28.146 **** 2026-02-04 01:08:07.619590 | orchestrator | =============================================================================== 2026-02-04 01:08:07.619594 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 63.10s 2026-02-04 01:08:07.619598 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 35.59s 2026-02-04 01:08:07.619602 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 24.26s 2026-02-04 01:08:07.619606 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.57s 2026-02-04 01:08:07.619610 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.31s 2026-02-04 01:08:07.619614 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.05s 2026-02-04 01:08:07.619617 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 1.99s 2026-02-04 01:08:07.619621 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.76s 2026-02-04 01:08:07.619625 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.38s 2026-02-04 01:08:07.619629 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.23s 2026-02-04 01:08:07.619633 | orchestrator | 
service-cert-copy : grafana | Copying over extra CA certificates -------- 1.17s 2026-02-04 01:08:07.619637 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.11s 2026-02-04 01:08:07.619640 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.09s 2026-02-04 01:08:07.619644 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.08s 2026-02-04 01:08:07.619648 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.75s 2026-02-04 01:08:07.619652 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.73s 2026-02-04 01:08:07.619656 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.68s 2026-02-04 01:08:07.619660 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.65s 2026-02-04 01:08:07.619663 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.65s 2026-02-04 01:08:07.619670 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.58s 2026-02-04 01:08:07.619674 | orchestrator | 2026-02-04 01:08:07 | INFO  | Task 7ee7721f-a402-4710-9819-42772fd6d2f6 is in state STARTED 2026-02-04 01:08:07.619715 | orchestrator | 2026-02-04 01:08:07 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:08:07.619723 | orchestrator | 2026-02-04 01:08:07 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:08:10.652179 | orchestrator | 2026-02-04 01:08:10 | INFO  | Task 7ee7721f-a402-4710-9819-42772fd6d2f6 is in state STARTED 2026-02-04 01:08:10.652268 | orchestrator | 2026-02-04 01:08:10 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:08:10.652276 | orchestrator | 2026-02-04 01:08:10 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:08:13.688251 | 
orchestrator | 2026-02-04 01:08:13 | INFO  | Task 7ee7721f-a402-4710-9819-42772fd6d2f6 is in state STARTED 2026-02-04 01:08:13.689664 | orchestrator | 2026-02-04 01:08:13 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:08:13.689714 | orchestrator | 2026-02-04 01:08:13 | INFO  | Wait 1 second(s) until the next check
[... identical status-polling records repeated every ~3 s from 01:08:16 through 01:08:38: both tasks remain in state STARTED ...]
2026-02-04 01:08:41.092841 | orchestrator | 2026-02-04 01:08:41 | INFO  | Task 7ee7721f-a402-4710-9819-42772fd6d2f6 is in state SUCCESS 2026-02-04 01:08:41.093797 | orchestrator | 2026-02-04 01:08:41 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:08:41.094404 | orchestrator | 2026-02-04 01:08:41 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:08:41.094527 | orchestrator | 2026-02-04 01:08:41 | INFO  | Wait 1 second(s) until the next check 2026-02-04 
01:08:44.137853 | orchestrator | 2026-02-04 01:08:44 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:08:44.140400 | orchestrator | 2026-02-04 01:08:44 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:08:44.140468 | orchestrator | 2026-02-04 01:08:44 | INFO  | Wait 1 second(s) until the next check
[... identical status-polling records repeated every ~3 s from 01:08:47 through 01:09:57: both tasks remain in state STARTED ...]
2026-02-04 01:10:00.111800 | orchestrator | 2026-02-04 01:10:00 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:10:00.113912 | orchestrator | 2026-02-04 01:10:00 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:10:00.113961 | orchestrator | 2026-02-04 01:10:00 | INFO  | Wait 1 second(s)
until the next check 2026-02-04 01:10:03.154927 | orchestrator | 2026-02-04 01:10:03 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:10:03.156084 | orchestrator | 2026-02-04 01:10:03 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:10:03.156144 | orchestrator | 2026-02-04 01:10:03 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:10:06.203807 | orchestrator | 2026-02-04 01:10:06 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:10:06.205267 | orchestrator | 2026-02-04 01:10:06 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:10:06.205433 | orchestrator | 2026-02-04 01:10:06 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:10:09.248389 | orchestrator | 2026-02-04 01:10:09 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:10:09.249815 | orchestrator | 2026-02-04 01:10:09 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:10:09.249841 | orchestrator | 2026-02-04 01:10:09 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:10:12.293403 | orchestrator | 2026-02-04 01:10:12 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:10:12.295134 | orchestrator | 2026-02-04 01:10:12 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:10:12.295246 | orchestrator | 2026-02-04 01:10:12 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:10:15.347925 | orchestrator | 2026-02-04 01:10:15 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:10:15.348883 | orchestrator | 2026-02-04 01:10:15 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:10:15.348984 | orchestrator | 2026-02-04 01:10:15 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:10:18.402188 | orchestrator | 2026-02-04 
01:10:18 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:10:18.404088 | orchestrator | 2026-02-04 01:10:18 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:10:18.404131 | orchestrator | 2026-02-04 01:10:18 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:10:21.448104 | orchestrator | 2026-02-04 01:10:21 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:10:21.450215 | orchestrator | 2026-02-04 01:10:21 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:10:21.450299 | orchestrator | 2026-02-04 01:10:21 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:10:24.491190 | orchestrator | 2026-02-04 01:10:24 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:10:24.492864 | orchestrator | 2026-02-04 01:10:24 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:10:24.492917 | orchestrator | 2026-02-04 01:10:24 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:10:27.537907 | orchestrator | 2026-02-04 01:10:27 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:10:27.540350 | orchestrator | 2026-02-04 01:10:27 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:10:27.540384 | orchestrator | 2026-02-04 01:10:27 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:10:30.581774 | orchestrator | 2026-02-04 01:10:30 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:10:30.583487 | orchestrator | 2026-02-04 01:10:30 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:10:30.583535 | orchestrator | 2026-02-04 01:10:30 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:10:33.627876 | orchestrator | 2026-02-04 01:10:33 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state 
STARTED 2026-02-04 01:10:33.629079 | orchestrator | 2026-02-04 01:10:33 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:10:33.629117 | orchestrator | 2026-02-04 01:10:33 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:10:36.661015 | orchestrator | 2026-02-04 01:10:36 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:10:36.662507 | orchestrator | 2026-02-04 01:10:36 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:10:36.662574 | orchestrator | 2026-02-04 01:10:36 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:10:39.701114 | orchestrator | 2026-02-04 01:10:39 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:10:39.702837 | orchestrator | 2026-02-04 01:10:39 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:10:39.703065 | orchestrator | 2026-02-04 01:10:39 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:10:42.747117 | orchestrator | 2026-02-04 01:10:42 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:10:42.748804 | orchestrator | 2026-02-04 01:10:42 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:10:42.749073 | orchestrator | 2026-02-04 01:10:42 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:10:45.783084 | orchestrator | 2026-02-04 01:10:45 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:10:45.785444 | orchestrator | 2026-02-04 01:10:45 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:10:45.785483 | orchestrator | 2026-02-04 01:10:45 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:10:48.827832 | orchestrator | 2026-02-04 01:10:48 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:10:48.830286 | orchestrator | 2026-02-04 01:10:48 | INFO  
| Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:10:48.830333 | orchestrator | 2026-02-04 01:10:48 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:10:51.866891 | orchestrator | 2026-02-04 01:10:51 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:10:51.868612 | orchestrator | 2026-02-04 01:10:51 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:10:51.868650 | orchestrator | 2026-02-04 01:10:51 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:10:54.907829 | orchestrator | 2026-02-04 01:10:54 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:10:54.908924 | orchestrator | 2026-02-04 01:10:54 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:10:54.908962 | orchestrator | 2026-02-04 01:10:54 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:10:57.946882 | orchestrator | 2026-02-04 01:10:57 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:10:57.947059 | orchestrator | 2026-02-04 01:10:57 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:10:57.947077 | orchestrator | 2026-02-04 01:10:57 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:00.985652 | orchestrator | 2026-02-04 01:11:00 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:11:00.986972 | orchestrator | 2026-02-04 01:11:00 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:11:00.987029 | orchestrator | 2026-02-04 01:11:00 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:04.055181 | orchestrator | 2026-02-04 01:11:04 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:11:04.057449 | orchestrator | 2026-02-04 01:11:04 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 
01:11:04.058783 | orchestrator | 2026-02-04 01:11:04 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:07.097539 | orchestrator | 2026-02-04 01:11:07 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:11:07.098891 | orchestrator | 2026-02-04 01:11:07 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:11:07.099093 | orchestrator | 2026-02-04 01:11:07 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:10.138571 | orchestrator | 2026-02-04 01:11:10 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:11:10.140234 | orchestrator | 2026-02-04 01:11:10 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:11:10.140253 | orchestrator | 2026-02-04 01:11:10 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:13.181269 | orchestrator | 2026-02-04 01:11:13 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:11:13.182821 | orchestrator | 2026-02-04 01:11:13 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:11:13.182874 | orchestrator | 2026-02-04 01:11:13 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:16.225089 | orchestrator | 2026-02-04 01:11:16 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:11:16.227223 | orchestrator | 2026-02-04 01:11:16 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:11:16.227278 | orchestrator | 2026-02-04 01:11:16 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:19.262797 | orchestrator | 2026-02-04 01:11:19 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:11:19.263004 | orchestrator | 2026-02-04 01:11:19 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:11:19.263518 | orchestrator | 2026-02-04 01:11:19 | INFO  | Wait 1 second(s) 
until the next check 2026-02-04 01:11:22.286708 | orchestrator | 2026-02-04 01:11:22 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:11:22.286844 | orchestrator | 2026-02-04 01:11:22 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:11:22.286856 | orchestrator | 2026-02-04 01:11:22 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:25.321478 | orchestrator | 2026-02-04 01:11:25 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:11:25.323773 | orchestrator | 2026-02-04 01:11:25 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:11:25.323822 | orchestrator | 2026-02-04 01:11:25 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:28.365967 | orchestrator | 2026-02-04 01:11:28 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:11:28.367642 | orchestrator | 2026-02-04 01:11:28 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:11:28.367683 | orchestrator | 2026-02-04 01:11:28 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:31.459486 | orchestrator | 2026-02-04 01:11:31 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:11:31.461165 | orchestrator | 2026-02-04 01:11:31 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:11:31.461587 | orchestrator | 2026-02-04 01:11:31 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:34.503437 | orchestrator | 2026-02-04 01:11:34 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:11:34.504001 | orchestrator | 2026-02-04 01:11:34 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:11:34.504081 | orchestrator | 2026-02-04 01:11:34 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:37.542396 | orchestrator | 2026-02-04 
01:11:37 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:11:37.545267 | orchestrator | 2026-02-04 01:11:37 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:11:37.545580 | orchestrator | 2026-02-04 01:11:37 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:40.580837 | orchestrator | 2026-02-04 01:11:40 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:11:40.580893 | orchestrator | 2026-02-04 01:11:40 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:11:40.580902 | orchestrator | 2026-02-04 01:11:40 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:43.599685 | orchestrator | 2026-02-04 01:11:43 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:11:43.599888 | orchestrator | 2026-02-04 01:11:43 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:11:43.599908 | orchestrator | 2026-02-04 01:11:43 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:46.618641 | orchestrator | 2026-02-04 01:11:46 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:11:46.618937 | orchestrator | 2026-02-04 01:11:46 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:11:46.618999 | orchestrator | 2026-02-04 01:11:46 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:49.644147 | orchestrator | 2026-02-04 01:11:49 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:11:49.644962 | orchestrator | 2026-02-04 01:11:49 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:11:49.645354 | orchestrator | 2026-02-04 01:11:49 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:52.665247 | orchestrator | 2026-02-04 01:11:52 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state 
STARTED 2026-02-04 01:11:52.665413 | orchestrator | 2026-02-04 01:11:52 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:11:52.665769 | orchestrator | 2026-02-04 01:11:52 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:55.706810 | orchestrator | 2026-02-04 01:11:55 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:11:55.708688 | orchestrator | 2026-02-04 01:11:55 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:11:55.708729 | orchestrator | 2026-02-04 01:11:55 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:11:58.749598 | orchestrator | 2026-02-04 01:11:58 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:11:58.750248 | orchestrator | 2026-02-04 01:11:58 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:11:58.750425 | orchestrator | 2026-02-04 01:11:58 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:01.787902 | orchestrator | 2026-02-04 01:12:01 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:12:01.790372 | orchestrator | 2026-02-04 01:12:01 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:12:01.790424 | orchestrator | 2026-02-04 01:12:01 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:04.829994 | orchestrator | 2026-02-04 01:12:04 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:12:04.831548 | orchestrator | 2026-02-04 01:12:04 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:12:04.831577 | orchestrator | 2026-02-04 01:12:04 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:07.874414 | orchestrator | 2026-02-04 01:12:07 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:12:07.876148 | orchestrator | 2026-02-04 01:12:07 | INFO  
| Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:12:07.876720 | orchestrator | 2026-02-04 01:12:07 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:10.923178 | orchestrator | 2026-02-04 01:12:10 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:12:10.925217 | orchestrator | 2026-02-04 01:12:10 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:12:10.925273 | orchestrator | 2026-02-04 01:12:10 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:13.957666 | orchestrator | 2026-02-04 01:12:13 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:12:13.958253 | orchestrator | 2026-02-04 01:12:13 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:12:13.958276 | orchestrator | 2026-02-04 01:12:13 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:17.004645 | orchestrator | 2026-02-04 01:12:17 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:12:17.006389 | orchestrator | 2026-02-04 01:12:17 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:12:17.006671 | orchestrator | 2026-02-04 01:12:17 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:20.045086 | orchestrator | 2026-02-04 01:12:20 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:12:20.047068 | orchestrator | 2026-02-04 01:12:20 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 01:12:20.047192 | orchestrator | 2026-02-04 01:12:20 | INFO  | Wait 1 second(s) until the next check 2026-02-04 01:12:23.098205 | orchestrator | 2026-02-04 01:12:23 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED 2026-02-04 01:12:23.101129 | orchestrator | 2026-02-04 01:12:23 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED 2026-02-04 
01:12:23.101184 | orchestrator | 2026-02-04 01:12:23 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:12:26.145084 | orchestrator | 2026-02-04 01:12:26 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state STARTED
2026-02-04 01:12:26.148976 | orchestrator | 2026-02-04 01:12:26 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED
2026-02-04 01:12:26.149475 | orchestrator | 2026-02-04 01:12:26 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:12:29.198224 | orchestrator | 2026-02-04 01:12:29 | INFO  | Task 6e1b9670-5ecc-492e-9a6d-8e4984824e2b is in state SUCCESS
2026-02-04 01:12:29.199877 | orchestrator |
2026-02-04 01:12:29.199928 | orchestrator |
2026-02-04 01:12:29.199933 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 01:12:29.199939 | orchestrator |
2026-02-04 01:12:29.199943 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 01:12:29.199949 | orchestrator | Wednesday 04 February 2026 01:04:57 +0000 (0:00:00.176) 0:00:00.176 ****
2026-02-04 01:12:29.199954 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:12:29.199959 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:12:29.199964 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:12:29.199969 | orchestrator |
2026-02-04 01:12:29.199973 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 01:12:29.199978 | orchestrator | Wednesday 04 February 2026 01:04:57 +0000 (0:00:00.353) 0:00:00.530 ****
2026-02-04 01:12:29.199982 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-02-04 01:12:29.200016 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-02-04 01:12:29.200051 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-02-04 01:12:29.200056 | orchestrator |
2026-02-04 01:12:29.200060 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-02-04 01:12:29.200065 | orchestrator |
2026-02-04 01:12:29.200069 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-02-04 01:12:29.200074 | orchestrator | Wednesday 04 February 2026 01:04:58 +0000 (0:00:00.808) 0:00:01.338 ****
2026-02-04 01:12:29.200078 | orchestrator |
2026-02-04 01:12:29.200083 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-02-04 01:12:29.200087 | orchestrator |
2026-02-04 01:12:29.200092 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-02-04 01:12:29.200096 | orchestrator |
2026-02-04 01:12:29.200101 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2026-02-04 01:12:29.200105 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:12:29.200110 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:12:29.200114 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:12:29.200118 | orchestrator |
2026-02-04 01:12:29.200122 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 01:12:29.200126 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:12:29.200131 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:12:29.200135 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:12:29.200139 | orchestrator |
2026-02-04 01:12:29.200143 | orchestrator |
2026-02-04 01:12:29.200146 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 01:12:29.200151 | orchestrator | Wednesday 04 February 2026 01:08:38 +0000 (0:03:39.839) 0:03:41.178 ****
2026-02-04 01:12:29.200155 | orchestrator | ===============================================================================
2026-02-04 01:12:29.200158 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 219.84s
2026-02-04 01:12:29.200162 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.81s
2026-02-04 01:12:29.200166 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s
2026-02-04 01:12:29.200170 | orchestrator |
2026-02-04 01:12:29.200174 | orchestrator |
2026-02-04 01:12:29.200177 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 01:12:29.200181 | orchestrator |
2026-02-04 01:12:29.200185 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-02-04 01:12:29.200189 | orchestrator | Wednesday 04 February 2026 01:04:33 +0000 (0:00:00.252) 0:00:00.252 ****
2026-02-04 01:12:29.200193 | orchestrator | changed: [testbed-manager]
2026-02-04 01:12:29.200197 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:12:29.200201 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:12:29.200220 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:12:29.200227 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:12:29.200233 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:12:29.200239 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:12:29.200245 | orchestrator |
2026-02-04 01:12:29.200251 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 01:12:29.200256 | orchestrator | Wednesday 04 February 2026 01:04:34 +0000 (0:00:00.799) 0:00:01.052 ****
2026-02-04 01:12:29.200262 | orchestrator | changed: [testbed-manager]
2026-02-04 01:12:29.200269 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:12:29.200275 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:12:29.200280 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:12:29.200286 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:12:29.200292 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:12:29.200298 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:12:29.200304 | orchestrator |
2026-02-04 01:12:29.200310 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 01:12:29.200339 | orchestrator | Wednesday 04 February 2026 01:04:34 +0000 (0:00:00.611) 0:00:01.663 ****
2026-02-04 01:12:29.200345 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-02-04 01:12:29.200352 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-02-04 01:12:29.200357 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-02-04 01:12:29.200360 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-02-04 01:12:29.200364 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-02-04 01:12:29.200368 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-02-04 01:12:29.200403 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-02-04 01:12:29.200408 | orchestrator |
2026-02-04 01:12:29.200412 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-02-04 01:12:29.200415 | orchestrator |
2026-02-04 01:12:29.200426 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-04 01:12:29.200430 | orchestrator | Wednesday 04 February 2026 01:04:35 +0000 (0:00:00.724) 0:00:02.388 ****
2026-02-04 01:12:29.200444 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:12:29.200448 | orchestrator |
2026-02-04 01:12:29.200452 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-02-04 01:12:29.200455 | orchestrator | Wednesday 04 February 2026 01:04:36 +0000 (0:00:00.582) 0:00:02.971 ****
2026-02-04 01:12:29.200459 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-02-04 01:12:29.200463 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-02-04 01:12:29.200467 | orchestrator |
2026-02-04 01:12:29.200471 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-02-04 01:12:29.200475 | orchestrator | Wednesday 04 February 2026 01:04:40 +0000 (0:00:04.248) 0:00:07.219 ****
2026-02-04 01:12:29.200478 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-04 01:12:29.200482 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-04 01:12:29.200486 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:12:29.200490 | orchestrator |
2026-02-04 01:12:29.200494 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-02-04 01:12:29.200497 | orchestrator | Wednesday 04 February 2026 01:04:44 +0000 (0:00:04.308) 0:00:11.527 ****
2026-02-04 01:12:29.200501 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:12:29.200505 | orchestrator |
2026-02-04 01:12:29.200509 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-02-04 01:12:29.200512 | orchestrator | Wednesday 04 February 2026 01:04:45 +0000 (0:00:01.346) 0:00:12.168 ****
2026-02-04 01:12:29.200516 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:12:29.200520 | orchestrator |
2026-02-04 01:12:29.200524 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-02-04 01:12:29.200527 | orchestrator | Wednesday 04 February 2026 01:04:46 +0000 (0:00:01.346) 0:00:13.514 ****
2026-02-04 01:12:29.200535 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:12:29.200539 | orchestrator |
2026-02-04 01:12:29.200543 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-04 01:12:29.200547 | orchestrator | Wednesday 04 February 2026 01:04:49 +0000 (0:00:02.588) 0:00:16.103 ****
2026-02-04 01:12:29.200550 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:12:29.200554 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:12:29.200558 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:12:29.200562 | orchestrator |
2026-02-04 01:12:29.200565 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-04 01:12:29.200569 | orchestrator | Wednesday 04 February 2026 01:04:49 +0000 (0:00:00.300) 0:00:16.403 ****
2026-02-04 01:12:29.200573 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:12:29.200577 | orchestrator |
2026-02-04 01:12:29.200581 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-02-04 01:12:29.200584 | orchestrator | Wednesday 04 February 2026 01:05:23 +0000 (0:00:33.286) 0:00:49.690 ****
2026-02-04 01:12:29.200588 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:12:29.200592 | orchestrator |
2026-02-04 01:12:29.200596 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-04 01:12:29.200599 | orchestrator | Wednesday 04 February 2026 01:05:38 +0000 (0:00:13.314) 0:01:05.513 ****
2026-02-04 01:12:29.200603 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:12:29.200607 | orchestrator |
2026-02-04 01:12:29.200611 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-04 01:12:29.200614 | orchestrator | Wednesday 04 February 2026 01:05:52 +0000 (0:00:01.165) 0:01:18.828 ****
2026-02-04 01:12:29.200618 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:12:29.200630 | orchestrator |
2026-02-04 01:12:29.200634 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-02-04 01:12:29.200656 | orchestrator | Wednesday 04 February 2026 01:05:53 +0000 (0:00:00.449) 0:01:19.993 ****
2026-02-04 01:12:29.200672 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:12:29.200677 | orchestrator |
2026-02-04 01:12:29.200680 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-04 01:12:29.200684 | orchestrator | Wednesday 04 February 2026 01:05:53 +0000 (0:00:00.449) 0:01:20.443 ****
2026-02-04 01:12:29.200688 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:12:29.200692 | orchestrator |
2026-02-04 01:12:29.200696 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-04 01:12:29.200700 | orchestrator | Wednesday 04 February 2026 01:05:54 +0000 (0:00:00.492) 0:01:20.935 ****
2026-02-04 01:12:29.200703 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:12:29.200707 | orchestrator |
2026-02-04 01:12:29.200711 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-04 01:12:29.200715 | orchestrator | Wednesday 04 February 2026 01:06:14 +0000 (0:00:20.317) 0:01:41.252 ****
2026-02-04 01:12:29.200718 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:12:29.200722 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:12:29.200726 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:12:29.200730 | orchestrator |
2026-02-04 01:12:29.200733 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-02-04 01:12:29.200737 | orchestrator |
2026-02-04 01:12:29.200741 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-04 01:12:29.200745 | orchestrator | Wednesday 04 February 2026 01:06:14 +0000 (0:00:00.326) 0:01:41.579 ****
2026-02-04 01:12:29.200748 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:12:29.200752 | orchestrator |
2026-02-04 01:12:29.200756 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-02-04 01:12:29.200760 | orchestrator | Wednesday 04 February 2026 01:06:15 +0000 (0:00:00.612) 0:01:42.192 ****
2026-02-04 01:12:29.200763 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:12:29.200770 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:12:29.200774 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:12:29.200782 | orchestrator |
2026-02-04 01:12:29.200786 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-02-04 01:12:29.200793 | orchestrator | Wednesday 04 February 2026 01:06:17 +0000 (0:00:01.876) 0:01:44.068 ****
2026-02-04 01:12:29.200797 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:12:29.200801 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:12:29.200814 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:12:29.200818 | orchestrator |
2026-02-04 01:12:29.200822 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-04 01:12:29.200825 | orchestrator | Wednesday 04 February 2026 01:06:19 +0000 (0:00:01.886) 0:01:45.955 ****
2026-02-04 01:12:29.200829 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:12:29.200833 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:12:29.200837 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:12:29.200844 | orchestrator |
2026-02-04 01:12:29.200848 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-04 01:12:29.200852 | orchestrator | Wednesday 04 February 2026 01:06:19 +0000 (0:00:00.277) 0:01:46.233 ****
2026-02-04 01:12:29.200856 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-04 01:12:29.200860 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:12:29.200863 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-04 01:12:29.200867 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:12:29.200871 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-04 01:12:29.200880 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-02-04 01:12:29.200884 | orchestrator |
2026-02-04 01:12:29.200888 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-04 01:12:29.200891 | orchestrator | Wednesday 04 February 2026 01:06:26 +0000 (0:00:07.158) 0:01:53.391 ****
2026-02-04 01:12:29.200895 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:12:29.200899 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:12:29.200903 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:12:29.200907 | orchestrator |
2026-02-04 01:12:29.200910 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-04 01:12:29.200914 | orchestrator | Wednesday 04 February 2026 01:06:27 +0000 (0:00:00.319) 0:01:53.711 ****
2026-02-04 01:12:29.200918 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-04 01:12:29.200922 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:12:29.200926 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-04 01:12:29.200929 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:12:29.200933 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-04 01:12:29.200937 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:12:29.200941 | orchestrator |
2026-02-04 01:12:29.200945 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-02-04 01:12:29.200948 | orchestrator | Wednesday 04 February 2026 01:06:27 +0000 (0:00:00.651) 0:01:54.363 ****
2026-02-04 01:12:29.200952 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:12:29.200956 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:12:29.200960 | orchestrator | changed: 
[testbed-node-0] 2026-02-04 01:12:29.200964 | orchestrator | 2026-02-04 01:12:29.200967 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-02-04 01:12:29.200971 | orchestrator | Wednesday 04 February 2026 01:06:28 +0000 (0:00:00.624) 0:01:54.988 **** 2026-02-04 01:12:29.200975 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.200979 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.200982 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:12:29.200986 | orchestrator | 2026-02-04 01:12:29.200990 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2026-02-04 01:12:29.200994 | orchestrator | Wednesday 04 February 2026 01:06:29 +0000 (0:00:00.898) 0:01:55.886 **** 2026-02-04 01:12:29.201001 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.201004 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.201008 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:12:29.201012 | orchestrator | 2026-02-04 01:12:29.201016 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-02-04 01:12:29.201031 | orchestrator | Wednesday 04 February 2026 01:06:31 +0000 (0:00:01.831) 0:01:57.717 **** 2026-02-04 01:12:29.201036 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.201042 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.201048 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:12:29.201055 | orchestrator | 2026-02-04 01:12:29.201061 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-04 01:12:29.201067 | orchestrator | Wednesday 04 February 2026 01:07:00 +0000 (0:00:29.536) 0:02:27.253 **** 2026-02-04 01:12:29.201073 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.201079 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.201085 | orchestrator | ok: [testbed-node-0] 
2026-02-04 01:12:29.201092 | orchestrator | 2026-02-04 01:12:29.201098 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-04 01:12:29.201105 | orchestrator | Wednesday 04 February 2026 01:07:14 +0000 (0:00:14.081) 0:02:41.334 **** 2026-02-04 01:12:29.201111 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:12:29.201117 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.201124 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.201128 | orchestrator | 2026-02-04 01:12:29.201131 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-02-04 01:12:29.201135 | orchestrator | Wednesday 04 February 2026 01:07:15 +0000 (0:00:01.053) 0:02:42.388 **** 2026-02-04 01:12:29.201139 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.201143 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.201147 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:12:29.201150 | orchestrator | 2026-02-04 01:12:29.201154 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-02-04 01:12:29.201158 | orchestrator | Wednesday 04 February 2026 01:07:27 +0000 (0:00:12.261) 0:02:54.650 **** 2026-02-04 01:12:29.201162 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.201166 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.201171 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.201185 | orchestrator | 2026-02-04 01:12:29.201192 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-04 01:12:29.201198 | orchestrator | Wednesday 04 February 2026 01:07:28 +0000 (0:00:00.946) 0:02:55.597 **** 2026-02-04 01:12:29.201204 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.201210 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.201216 | orchestrator | skipping: [testbed-node-2] 2026-02-04 
01:12:29.201223 | orchestrator | 2026-02-04 01:12:29.201233 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-02-04 01:12:29.201239 | orchestrator | 2026-02-04 01:12:29.201277 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-04 01:12:29.201283 | orchestrator | Wednesday 04 February 2026 01:07:29 +0000 (0:00:00.400) 0:02:55.998 **** 2026-02-04 01:12:29.201287 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:12:29.201291 | orchestrator | 2026-02-04 01:12:29.201294 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-02-04 01:12:29.201298 | orchestrator | Wednesday 04 February 2026 01:07:29 +0000 (0:00:00.482) 0:02:56.480 **** 2026-02-04 01:12:29.201328 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-02-04 01:12:29.201334 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-02-04 01:12:29.201340 | orchestrator | 2026-02-04 01:12:29.201346 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-02-04 01:12:29.201351 | orchestrator | Wednesday 04 February 2026 01:07:33 +0000 (0:00:03.445) 0:02:59.925 **** 2026-02-04 01:12:29.201363 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-02-04 01:12:29.201370 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-02-04 01:12:29.201376 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-02-04 01:12:29.201383 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-02-04 01:12:29.201390 | 
orchestrator | 2026-02-04 01:12:29.201396 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-02-04 01:12:29.201402 | orchestrator | Wednesday 04 February 2026 01:07:39 +0000 (0:00:06.699) 0:03:06.625 **** 2026-02-04 01:12:29.201409 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-04 01:12:29.201415 | orchestrator | 2026-02-04 01:12:29.201421 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-02-04 01:12:29.201428 | orchestrator | Wednesday 04 February 2026 01:07:43 +0000 (0:00:03.350) 0:03:09.975 **** 2026-02-04 01:12:29.201434 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-04 01:12:29.201440 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-02-04 01:12:29.201446 | orchestrator | 2026-02-04 01:12:29.201452 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-02-04 01:12:29.201459 | orchestrator | Wednesday 04 February 2026 01:07:47 +0000 (0:00:03.926) 0:03:13.902 **** 2026-02-04 01:12:29.201465 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-04 01:12:29.201471 | orchestrator | 2026-02-04 01:12:29.201477 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-02-04 01:12:29.201484 | orchestrator | Wednesday 04 February 2026 01:07:50 +0000 (0:00:03.436) 0:03:17.338 **** 2026-02-04 01:12:29.201490 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-02-04 01:12:29.201496 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-02-04 01:12:29.201502 | orchestrator | 2026-02-04 01:12:29.201509 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-04 01:12:29.201515 | orchestrator | Wednesday 04 February 2026 01:07:58 +0000 (0:00:07.728) 0:03:25.067 **** 2026-02-04 
01:12:29.201524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 01:12:29.201544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 01:12:29.201554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 01:12:29.201559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.201564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.201568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.201572 | orchestrator | 2026-02-04 01:12:29.201576 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-02-04 01:12:29.201582 | orchestrator | Wednesday 04 February 2026 01:07:59 +0000 (0:00:01.197) 
0:03:26.264 **** 2026-02-04 01:12:29.201586 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.201590 | orchestrator | 2026-02-04 01:12:29.201594 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-02-04 01:12:29.201600 | orchestrator | Wednesday 04 February 2026 01:07:59 +0000 (0:00:00.122) 0:03:26.387 **** 2026-02-04 01:12:29.201604 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.201608 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.201612 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.201616 | orchestrator | 2026-02-04 01:12:29.201620 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-02-04 01:12:29.201624 | orchestrator | Wednesday 04 February 2026 01:07:59 +0000 (0:00:00.283) 0:03:26.670 **** 2026-02-04 01:12:29.201627 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 01:12:29.201631 | orchestrator | 2026-02-04 01:12:29.201635 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-02-04 01:12:29.201639 | orchestrator | Wednesday 04 February 2026 01:08:00 +0000 (0:00:00.850) 0:03:27.521 **** 2026-02-04 01:12:29.201643 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.201646 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.201650 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.201654 | orchestrator | 2026-02-04 01:12:29.201658 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-04 01:12:29.201662 | orchestrator | Wednesday 04 February 2026 01:08:01 +0000 (0:00:00.287) 0:03:27.808 **** 2026-02-04 01:12:29.201665 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:12:29.201669 | orchestrator | 2026-02-04 01:12:29.201673 | orchestrator | TASK [service-cert-copy : 
nova | Copying over extra CA certificates] *********** 2026-02-04 01:12:29.201677 | orchestrator | Wednesday 04 February 2026 01:08:01 +0000 (0:00:00.528) 0:03:28.337 **** 2026-02-04 01:12:29.201700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 01:12:29.201706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 01:12:29.201720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}}) 2026-02-04 01:12:29.201724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.201738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.201742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.201746 | orchestrator | 2026-02-04 01:12:29.201750 | orchestrator | TASK 
[service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-04 01:12:29.201754 | orchestrator | Wednesday 04 February 2026 01:08:04 +0000 (0:00:02.503) 0:03:30.841 **** 2026-02-04 01:12:29.201758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 01:12:29.201771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.201775 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.201780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 01:12:29.201784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.201788 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.201792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 01:12:29.202077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.202092 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.202096 | orchestrator | 2026-02-04 01:12:29.202100 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-04 01:12:29.202104 | orchestrator | Wednesday 04 February 2026 01:08:04 +0000 (0:00:00.672) 0:03:31.514 **** 2026-02-04 01:12:29.202108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 01:12:29.202112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.202116 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.202120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 01:12:29.202131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.202135 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.202143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 01:12:29.202148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.202152 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.202156 | orchestrator | 2026-02-04 01:12:29.202160 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-02-04 01:12:29.202164 | orchestrator | Wednesday 04 February 2026 01:08:05 +0000 (0:00:00.688) 0:03:32.203 **** 2026-02-04 01:12:29.202168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 01:12:29.202180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 01:12:29.202185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 01:12:29.202189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.202196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.202200 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.202204 | orchestrator | 2026-02-04 01:12:29.202208 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-02-04 01:12:29.202212 | orchestrator | Wednesday 04 February 2026 01:08:07 +0000 (0:00:02.290) 0:03:34.493 **** 2026-02-04 01:12:29.202221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 01:12:29.202226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 01:12:29.202232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 01:12:29.202237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.202246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.202250 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.202254 | orchestrator | 2026-02-04 01:12:29.202258 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-02-04 01:12:29.202262 | orchestrator | Wednesday 04 February 2026 01:08:13 +0000 (0:00:05.212) 0:03:39.706 **** 2026-02-04 01:12:29.202266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 01:12:29.202274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.202278 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.202286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 01:12:29.202293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.202299 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.202306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 01:12:29.202318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.202324 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.202330 | orchestrator | 2026-02-04 01:12:29.202336 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-02-04 01:12:29.202342 | orchestrator | Wednesday 04 February 2026 01:08:13 +0000 (0:00:00.569) 0:03:40.275 **** 2026-02-04 01:12:29.202348 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:12:29.202353 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:12:29.202359 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:12:29.202364 | orchestrator | 2026-02-04 01:12:29.202369 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-02-04 01:12:29.202375 | orchestrator | Wednesday 04 February 2026 01:08:15 +0000 (0:00:01.653) 0:03:41.929 **** 2026-02-04 01:12:29.202380 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.202386 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.202392 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.202398 | orchestrator | 2026-02-04 01:12:29.202403 | orchestrator | TASK [nova : Check nova 
containers] ******************************************** 2026-02-04 01:12:29.202409 | orchestrator | Wednesday 04 February 2026 01:08:15 +0000 (0:00:00.317) 0:03:42.246 **** 2026-02-04 01:12:29.202424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 01:12:29.202431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 01:12:29.202441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}}}}) 2026-02-04 01:12:29.202461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.202474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.202481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.202491 | orchestrator | 2026-02-04 01:12:29.202497 | orchestrator 
| TASK [nova : Flush handlers] *************************************************** 2026-02-04 01:12:29.202503 | orchestrator | Wednesday 04 February 2026 01:08:17 +0000 (0:00:01.972) 0:03:44.219 **** 2026-02-04 01:12:29.202508 | orchestrator | 2026-02-04 01:12:29.202514 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-04 01:12:29.202520 | orchestrator | Wednesday 04 February 2026 01:08:17 +0000 (0:00:00.132) 0:03:44.351 **** 2026-02-04 01:12:29.202526 | orchestrator | 2026-02-04 01:12:29.202532 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-04 01:12:29.202538 | orchestrator | Wednesday 04 February 2026 01:08:17 +0000 (0:00:00.128) 0:03:44.480 **** 2026-02-04 01:12:29.202543 | orchestrator | 2026-02-04 01:12:29.202549 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-02-04 01:12:29.202555 | orchestrator | Wednesday 04 February 2026 01:08:17 +0000 (0:00:00.127) 0:03:44.607 **** 2026-02-04 01:12:29.202560 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:12:29.202565 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:12:29.202571 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:12:29.202576 | orchestrator | 2026-02-04 01:12:29.202581 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-02-04 01:12:29.202587 | orchestrator | Wednesday 04 February 2026 01:08:31 +0000 (0:00:13.296) 0:03:57.904 **** 2026-02-04 01:12:29.202592 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:12:29.202598 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:12:29.202603 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:12:29.202609 | orchestrator | 2026-02-04 01:12:29.202615 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-02-04 01:12:29.202620 | orchestrator | 2026-02-04 
01:12:29.202625 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-04 01:12:29.202631 | orchestrator | Wednesday 04 February 2026 01:08:36 +0000 (0:00:04.989) 0:04:02.893 **** 2026-02-04 01:12:29.202637 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:12:29.202643 | orchestrator | 2026-02-04 01:12:29.202649 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-04 01:12:29.202656 | orchestrator | Wednesday 04 February 2026 01:08:37 +0000 (0:00:01.149) 0:04:04.043 **** 2026-02-04 01:12:29.202661 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:12:29.202667 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:12:29.202672 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:12:29.202678 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.202684 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.202690 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.202696 | orchestrator | 2026-02-04 01:12:29.202702 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-02-04 01:12:29.202708 | orchestrator | Wednesday 04 February 2026 01:08:37 +0000 (0:00:00.559) 0:04:04.602 **** 2026-02-04 01:12:29.202713 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.202720 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.202725 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.202731 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 01:12:29.202737 | orchestrator | 2026-02-04 01:12:29.202780 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-04 01:12:29.202789 | orchestrator | Wednesday 04 February 2026 
01:08:38 +0000 (0:00:01.006) 0:04:05.609 **** 2026-02-04 01:12:29.202796 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-02-04 01:12:29.202802 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-02-04 01:12:29.202808 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-02-04 01:12:29.202814 | orchestrator | 2026-02-04 01:12:29.202820 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-04 01:12:29.202833 | orchestrator | Wednesday 04 February 2026 01:08:39 +0000 (0:00:00.724) 0:04:06.333 **** 2026-02-04 01:12:29.202840 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-02-04 01:12:29.202846 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-02-04 01:12:29.202852 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-02-04 01:12:29.202858 | orchestrator | 2026-02-04 01:12:29.202865 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-04 01:12:29.202870 | orchestrator | Wednesday 04 February 2026 01:08:40 +0000 (0:00:01.127) 0:04:07.461 **** 2026-02-04 01:12:29.202899 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-02-04 01:12:29.202905 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:12:29.202912 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-02-04 01:12:29.202921 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:12:29.202928 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-02-04 01:12:29.202933 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:12:29.202939 | orchestrator | 2026-02-04 01:12:29.202951 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-02-04 01:12:29.202957 | orchestrator | Wednesday 04 February 2026 01:08:41 +0000 (0:00:00.713) 0:04:08.175 **** 2026-02-04 01:12:29.202963 | orchestrator | skipping: 
[testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 01:12:29.202968 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 01:12:29.202973 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.202979 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 01:12:29.202985 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 01:12:29.202990 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.202995 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 01:12:29.203000 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 01:12:29.203006 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.203011 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-04 01:12:29.203017 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-04 01:12:29.203035 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-04 01:12:29.203041 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-04 01:12:29.203046 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-04 01:12:29.203060 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-04 01:12:29.203066 | orchestrator | 2026-02-04 01:12:29.203071 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-02-04 01:12:29.203077 | orchestrator | Wednesday 04 February 2026 01:08:43 +0000 (0:00:01.940) 0:04:10.115 **** 2026-02-04 01:12:29.203082 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.203088 | orchestrator | skipping: [testbed-node-1] 2026-02-04 
01:12:29.203094 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:12:29.203099 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.203105 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:12:29.203110 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:12:29.203116 | orchestrator | 2026-02-04 01:12:29.203122 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-02-04 01:12:29.203128 | orchestrator | Wednesday 04 February 2026 01:08:44 +0000 (0:00:01.324) 0:04:11.440 **** 2026-02-04 01:12:29.203133 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.203139 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.203145 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.203150 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:12:29.203160 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:12:29.203166 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:12:29.203171 | orchestrator | 2026-02-04 01:12:29.203177 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-04 01:12:29.203182 | orchestrator | Wednesday 04 February 2026 01:08:46 +0000 (0:00:01.607) 0:04:13.047 **** 2026-02-04 01:12:29.203189 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': 
{'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203200 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203213 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203219 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203236 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203242 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203248 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203261 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 
5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203276 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203299 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203319 | orchestrator | 2026-02-04 01:12:29.203325 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-04 01:12:29.203331 | orchestrator | Wednesday 04 February 2026 01:08:48 +0000 (0:00:02.106) 0:04:15.153 **** 2026-02-04 01:12:29.203337 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:12:29.203344 | orchestrator | 2026-02-04 01:12:29.203349 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-04 01:12:29.203355 | orchestrator | Wednesday 04 February 2026 01:08:49 +0000 (0:00:01.016) 0:04:16.170 **** 2026-02-04 01:12:29.203361 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203371 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203377 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203403 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 
'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203420 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203426 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203451 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203470 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203476 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.203483 | orchestrator | 2026-02-04 01:12:29.203489 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-04 01:12:29.203495 | orchestrator | Wednesday 04 February 2026 01:08:52 +0000 (0:00:03.325) 0:04:19.496 **** 2026-02-04 01:12:29.203507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 01:12:29.203514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 01:12:29.203525 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.203532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 01:12:29.203538 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 01:12:29.203545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.203552 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:12:29.203558 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:12:29.203572 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 01:12:29.203583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 01:12:29.203590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.203597 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:12:29.203604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-04 01:12:29.203611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.203618 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.203624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-04 01:12:29.203792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.203815 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.203822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-04 01:12:29.203829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.203835 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.203841 | orchestrator | 2026-02-04 01:12:29.203848 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal 
TLS key] ******** 2026-02-04 01:12:29.203854 | orchestrator | Wednesday 04 February 2026 01:08:54 +0000 (0:00:01.342) 0:04:20.839 **** 2026-02-04 01:12:29.203861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 01:12:29.203868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 01:12:29.203882 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 
'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.203894 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:12:29.203901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 01:12:29.203908 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 01:12:29.203914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 01:12:29.203921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 01:12:29.203927 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.203944 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.203950 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:12:29.203957 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:12:29.203963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-04 01:12:29.203969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.203976 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.203983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-04 01:12:29.203989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.203996 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.204002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-04 01:12:29.204049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.204057 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.204064 | orchestrator | 2026-02-04 01:12:29.204071 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-04 01:12:29.204077 | orchestrator | Wednesday 04 February 2026 01:08:56 +0000 (0:00:01.864) 0:04:22.704 **** 2026-02-04 01:12:29.204083 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.204090 | orchestrator | skipping: 
[testbed-node-1] 2026-02-04 01:12:29.204095 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.204102 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 01:12:29.204109 | orchestrator | 2026-02-04 01:12:29.204115 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-02-04 01:12:29.204122 | orchestrator | Wednesday 04 February 2026 01:08:56 +0000 (0:00:00.874) 0:04:23.578 **** 2026-02-04 01:12:29.204128 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-04 01:12:29.204134 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-04 01:12:29.204140 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-04 01:12:29.204147 | orchestrator | 2026-02-04 01:12:29.204153 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-02-04 01:12:29.204159 | orchestrator | Wednesday 04 February 2026 01:08:57 +0000 (0:00:00.812) 0:04:24.391 **** 2026-02-04 01:12:29.204166 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-04 01:12:29.204172 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-04 01:12:29.204178 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-04 01:12:29.204184 | orchestrator | 2026-02-04 01:12:29.204190 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-02-04 01:12:29.204197 | orchestrator | Wednesday 04 February 2026 01:08:58 +0000 (0:00:00.791) 0:04:25.183 **** 2026-02-04 01:12:29.204204 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:12:29.204210 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:12:29.204216 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:12:29.204223 | orchestrator | 2026-02-04 01:12:29.204229 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-02-04 01:12:29.204235 | orchestrator | Wednesday 04 February 
2026 01:08:58 +0000 (0:00:00.473) 0:04:25.657 **** 2026-02-04 01:12:29.204241 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:12:29.204247 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:12:29.204253 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:12:29.204259 | orchestrator | 2026-02-04 01:12:29.204265 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-02-04 01:12:29.204271 | orchestrator | Wednesday 04 February 2026 01:08:59 +0000 (0:00:00.618) 0:04:26.275 **** 2026-02-04 01:12:29.204278 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-04 01:12:29.204284 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-04 01:12:29.204290 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-04 01:12:29.204296 | orchestrator | 2026-02-04 01:12:29.204303 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-02-04 01:12:29.204309 | orchestrator | Wednesday 04 February 2026 01:09:00 +0000 (0:00:01.035) 0:04:27.310 **** 2026-02-04 01:12:29.204323 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-04 01:12:29.204329 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-04 01:12:29.204335 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-04 01:12:29.204341 | orchestrator | 2026-02-04 01:12:29.204348 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-02-04 01:12:29.204354 | orchestrator | Wednesday 04 February 2026 01:09:01 +0000 (0:00:00.996) 0:04:28.307 **** 2026-02-04 01:12:29.204360 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-04 01:12:29.204367 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-04 01:12:29.204373 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-04 01:12:29.204454 | orchestrator | changed: 
[testbed-node-3] => (item=nova-libvirt) 2026-02-04 01:12:29.204460 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-02-04 01:12:29.204464 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-02-04 01:12:29.204469 | orchestrator | 2026-02-04 01:12:29.204473 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-02-04 01:12:29.204478 | orchestrator | Wednesday 04 February 2026 01:09:05 +0000 (0:00:03.576) 0:04:31.884 **** 2026-02-04 01:12:29.204482 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:12:29.204487 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:12:29.204491 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:12:29.204495 | orchestrator | 2026-02-04 01:12:29.204499 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-02-04 01:12:29.204504 | orchestrator | Wednesday 04 February 2026 01:09:05 +0000 (0:00:00.507) 0:04:32.391 **** 2026-02-04 01:12:29.204508 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:12:29.204513 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:12:29.204517 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:12:29.204522 | orchestrator | 2026-02-04 01:12:29.204526 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-02-04 01:12:29.204530 | orchestrator | Wednesday 04 February 2026 01:09:06 +0000 (0:00:00.303) 0:04:32.694 **** 2026-02-04 01:12:29.204535 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:12:29.204539 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:12:29.204544 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:12:29.204548 | orchestrator | 2026-02-04 01:12:29.204552 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-02-04 01:12:29.204560 | orchestrator | Wednesday 04 February 2026 01:09:07 +0000 (0:00:01.172) 
0:04:33.867 **** 2026-02-04 01:12:29.204569 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-04 01:12:29.204574 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-04 01:12:29.204579 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-04 01:12:29.204583 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-04 01:12:29.204588 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-04 01:12:29.204618 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-04 01:12:29.204623 | orchestrator | 2026-02-04 01:12:29.204628 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-02-04 01:12:29.204632 | orchestrator | Wednesday 04 February 2026 01:09:10 +0000 (0:00:03.021) 0:04:36.888 **** 2026-02-04 01:12:29.204637 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-04 01:12:29.204654 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-04 01:12:29.204662 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-04 01:12:29.204666 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-04 01:12:29.204671 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:12:29.204675 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-04 01:12:29.204680 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:12:29.204684 | orchestrator | changed: 
[testbed-node-5] => (item=None) 2026-02-04 01:12:29.204689 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:12:29.204693 | orchestrator | 2026-02-04 01:12:29.204697 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-02-04 01:12:29.204702 | orchestrator | Wednesday 04 February 2026 01:09:13 +0000 (0:00:02.999) 0:04:39.888 **** 2026-02-04 01:12:29.204706 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:12:29.204711 | orchestrator | 2026-02-04 01:12:29.204715 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-02-04 01:12:29.204720 | orchestrator | Wednesday 04 February 2026 01:09:13 +0000 (0:00:00.143) 0:04:40.031 **** 2026-02-04 01:12:29.204724 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:12:29.204728 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:12:29.204733 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:12:29.204737 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.204742 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.204746 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.204750 | orchestrator | 2026-02-04 01:12:29.204754 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-02-04 01:12:29.204759 | orchestrator | Wednesday 04 February 2026 01:09:13 +0000 (0:00:00.560) 0:04:40.592 **** 2026-02-04 01:12:29.204763 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-04 01:12:29.204768 | orchestrator | 2026-02-04 01:12:29.204772 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-02-04 01:12:29.204777 | orchestrator | Wednesday 04 February 2026 01:09:14 +0000 (0:00:00.677) 0:04:41.269 **** 2026-02-04 01:12:29.204782 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:12:29.204789 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:12:29.204794 | 
orchestrator | skipping: [testbed-node-5] 2026-02-04 01:12:29.204800 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.204807 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.204814 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.204820 | orchestrator | 2026-02-04 01:12:29.204826 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-02-04 01:12:29.204833 | orchestrator | Wednesday 04 February 2026 01:09:15 +0000 (0:00:00.728) 0:04:41.998 **** 2026-02-04 01:12:29.204838 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 01:12:29.204849 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 01:12:29.204856 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 01:12:29.204861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 01:12:29.204865 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-04 01:12:29.204869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-04 01:12:29.204873 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-04 01:12:29.204883 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-04 01:12:29.204890 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-04 01:12:29.204894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-04 01:12:29.204898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-04 01:12:29.204902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-04 01:12:29.204906 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-04 01:12:29.204915 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-04 01:12:29.204922 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-04 01:12:29.204926 | orchestrator |
2026-02-04 01:12:29.204930 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-02-04 01:12:29.204934 | orchestrator | Wednesday 04 February 2026 01:09:18 +0000 (0:00:03.598) 0:04:45.597 ****
2026-02-04 01:12:29.204938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-04 01:12:29.204942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-04 01:12:29.204946 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-04 01:12:29.204951 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-04 01:12:29.205149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-04 01:12:29.205165 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-04 01:12:29.205172 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-04 01:12:29.205179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-04 01:12:29.205185 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-04 01:12:29.205207 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-04 01:12:29.205214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-04 01:12:29.205221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-04 01:12:29.205228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-04 01:12:29.205232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-04 01:12:29.205236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-04 01:12:29.205243 | orchestrator |
2026-02-04 01:12:29.205247 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2026-02-04 01:12:29.205251 | orchestrator | Wednesday 04 February 2026 01:09:24 +0000 (0:00:05.938) 0:04:51.535 ****
2026-02-04 01:12:29.205255 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:12:29.205259 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:12:29.205263 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:12:29.205266 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:12:29.205270 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:12:29.205274 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:12:29.205277 | orchestrator |
2026-02-04 01:12:29.205281 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2026-02-04 01:12:29.205285 | orchestrator | Wednesday 04 February 2026 01:09:26 +0000 (0:00:01.228) 0:04:52.764 ****
2026-02-04 01:12:29.205289 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-04 01:12:29.205295 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-04 01:12:29.205301 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-04 01:12:29.205305 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-04 01:12:29.205309 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-04 01:12:29.205313 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-04 01:12:29.205317 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:12:29.205321 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2026-02-04 01:12:29.205325 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-04 01:12:29.205328 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:12:29.205332 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-04 01:12:29.205336 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:12:29.205340 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-04 01:12:29.205344 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-04 01:12:29.205348 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2026-02-04 01:12:29.205351 | orchestrator |
2026-02-04 01:12:29.205355 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2026-02-04 01:12:29.205359 | orchestrator | Wednesday 04 February 2026 01:09:29 +0000 (0:00:03.130) 0:04:55.894 ****
2026-02-04 01:12:29.205363 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:12:29.205367 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:12:29.205370 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:12:29.205374 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:12:29.205378 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:12:29.205382 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:12:29.205385 | orchestrator |
2026-02-04 01:12:29.205389 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2026-02-04 01:12:29.205393 | orchestrator | Wednesday 04 February 2026 01:09:29 +0000 (0:00:00.553) 0:04:56.447 ****
2026-02-04 01:12:29.205397 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-04 01:12:29.205401 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-04 01:12:29.205405 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-04 01:12:29.205413 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-04 01:12:29.205417 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-04 01:12:29.205421 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-02-04 01:12:29.205425 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-04 01:12:29.205429 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-04 01:12:29.205433 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-04 01:12:29.205437 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-04 01:12:29.205440 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:12:29.205444 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-04 01:12:29.205448 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:12:29.205452 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-04 01:12:29.205455 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:12:29.205459 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-04 01:12:29.205463 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-04 01:12:29.205467 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-02-04 01:12:29.205471 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-04 01:12:29.205474 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-04 01:12:29.205478 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-02-04 01:12:29.205482 | orchestrator |
2026-02-04 01:12:29.205486 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-02-04 01:12:29.205492 | orchestrator | Wednesday 04 February 2026 01:09:34 +0000 (0:00:04.933) 0:05:01.381 ****
2026-02-04 01:12:29.205498 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-04 01:12:29.205502 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-04 01:12:29.205505 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-04 01:12:29.205509 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-04 01:12:29.205513 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-04 01:12:29.205517 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-04 01:12:29.205521 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-04 01:12:29.205524 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-04 01:12:29.205528 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-04 01:12:29.205532 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-04 01:12:29.205536 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-04 01:12:29.205539 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-04 01:12:29.205546 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:12:29.205550 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-04 01:12:29.205553 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-04 01:12:29.205557 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:12:29.205561 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-04 01:12:29.205565 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-04 01:12:29.205568 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:12:29.205572 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-04 01:12:29.205576 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-04 01:12:29.205580 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-04 01:12:29.205584 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-04 01:12:29.205587 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-04 01:12:29.205591 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-04 01:12:29.205595 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-04 01:12:29.205599 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-04 01:12:29.205603 | orchestrator |
2026-02-04 01:12:29.205606 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-02-04 01:12:29.205610 | orchestrator | Wednesday 04 February 2026 01:09:40 +0000 (0:00:06.182) 0:05:07.564 ****
2026-02-04 01:12:29.205614 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:12:29.205618 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:12:29.205621 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:12:29.205625 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:12:29.205629 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:12:29.205633 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:12:29.205637 | orchestrator |
2026-02-04 01:12:29.205640 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-02-04 01:12:29.205644 | orchestrator | Wednesday 04 February 2026 01:09:41 +0000 (0:00:00.607) 0:05:08.171 ****
2026-02-04 01:12:29.205648 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:12:29.205652 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:12:29.205655 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:12:29.205659 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:12:29.205663 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:12:29.205667 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:12:29.205671 | orchestrator |
2026-02-04 01:12:29.205674 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-02-04 01:12:29.205678 | orchestrator | Wednesday 04 February 2026 01:09:42 +0000 (0:00:00.567) 0:05:08.738 ****
2026-02-04 01:12:29.205682 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:12:29.205686 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:12:29.205689 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:12:29.205693 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:12:29.205697 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:12:29.205701 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:12:29.205704 | orchestrator |
2026-02-04 01:12:29.205708 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-02-04 01:12:29.205712 | orchestrator | Wednesday 04 February 2026 01:09:44 +0000 (0:00:02.039) 0:05:10.778 ****
2026-02-04 01:12:29.205722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-04 01:12:29.205729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-04 01:12:29.205733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-04 01:12:29.205737 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:12:29.205741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-04 01:12:29.205745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-04 01:12:29.205751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-04 01:12:29.205759 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:12:29.205763 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-04 01:12:29.205767 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-04 01:12:29.205772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-04 01:12:29.205777 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:12:29.205781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-04 01:12:29.205786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.205793 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.205801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-04 01:12:29.205806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.205811 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.205815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-04 01:12:29.205820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 01:12:29.205824 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.205829 | orchestrator | 2026-02-04 01:12:29.205833 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-02-04 01:12:29.205838 | orchestrator | Wednesday 04 February 2026 01:09:45 +0000 (0:00:01.324) 0:05:12.102 **** 2026-02-04 01:12:29.205842 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-04 01:12:29.205847 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-04 01:12:29.205851 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:12:29.205856 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-04 01:12:29.205860 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-04 01:12:29.205865 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:12:29.205869 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-04 
01:12:29.205873 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-04 01:12:29.205878 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:12:29.205882 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-04 01:12:29.205886 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-04 01:12:29.205893 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.205897 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-02-04 01:12:29.205902 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-04 01:12:29.205906 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.205910 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-04 01:12:29.205915 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-04 01:12:29.205919 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.205924 | orchestrator | 2026-02-04 01:12:29.205928 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-02-04 01:12:29.205932 | orchestrator | Wednesday 04 February 2026 01:09:46 +0000 (0:00:00.831) 0:05:12.933 **** 2026-02-04 01:12:29.205941 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': 
{'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 01:12:29.205946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 01:12:29.205951 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 01:12:29.205956 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 01:12:29.205963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 01:12:29.205968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 01:12:29.205977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.205982 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 01:12:29.205987 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 01:12:29.205991 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 01:12:29.205996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.206003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.206009 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.206062 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.206093 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 01:12:29.206098 | orchestrator | 2026-02-04 01:12:29.206103 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-04 01:12:29.206107 | orchestrator | Wednesday 04 February 2026 01:09:49 +0000 (0:00:02.932) 0:05:15.866 **** 2026-02-04 01:12:29.206112 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:12:29.206116 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:12:29.206121 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:12:29.206125 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.206135 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.206139 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.206143 | orchestrator | 2026-02-04 01:12:29.206148 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-04 01:12:29.206159 | orchestrator | Wednesday 04 February 2026 01:09:49 +0000 (0:00:00.721) 0:05:16.588 **** 2026-02-04 01:12:29.206164 | orchestrator | 2026-02-04 01:12:29.206168 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-04 01:12:29.206175 | orchestrator | Wednesday 04 February 2026 01:09:50 +0000 (0:00:00.131) 0:05:16.719 **** 2026-02-04 01:12:29.206181 | orchestrator | 2026-02-04 01:12:29.206187 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-04 01:12:29.206194 | orchestrator | Wednesday 04 February 2026 01:09:50 +0000 (0:00:00.128) 0:05:16.847 **** 2026-02-04 01:12:29.206200 | orchestrator | 2026-02-04 01:12:29.206207 | orchestrator | 
TASK [nova-cell : Flush handlers] ********************************************** 2026-02-04 01:12:29.206213 | orchestrator | Wednesday 04 February 2026 01:09:50 +0000 (0:00:00.153) 0:05:17.000 **** 2026-02-04 01:12:29.206220 | orchestrator | 2026-02-04 01:12:29.206227 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-04 01:12:29.206233 | orchestrator | Wednesday 04 February 2026 01:09:50 +0000 (0:00:00.127) 0:05:17.128 **** 2026-02-04 01:12:29.206239 | orchestrator | 2026-02-04 01:12:29.206243 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-04 01:12:29.206246 | orchestrator | Wednesday 04 February 2026 01:09:50 +0000 (0:00:00.129) 0:05:17.257 **** 2026-02-04 01:12:29.206250 | orchestrator | 2026-02-04 01:12:29.206254 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-02-04 01:12:29.206258 | orchestrator | Wednesday 04 February 2026 01:09:50 +0000 (0:00:00.283) 0:05:17.541 **** 2026-02-04 01:12:29.206262 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:12:29.206265 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:12:29.206269 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:12:29.206273 | orchestrator | 2026-02-04 01:12:29.206277 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-02-04 01:12:29.206281 | orchestrator | Wednesday 04 February 2026 01:10:02 +0000 (0:00:11.676) 0:05:29.217 **** 2026-02-04 01:12:29.206284 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:12:29.206288 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:12:29.206292 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:12:29.206296 | orchestrator | 2026-02-04 01:12:29.206299 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-02-04 01:12:29.206303 | orchestrator | Wednesday 04 February 
2026 01:10:15 +0000 (0:00:12.469) 0:05:41.687 **** 2026-02-04 01:12:29.206307 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:12:29.206311 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:12:29.206315 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:12:29.206318 | orchestrator | 2026-02-04 01:12:29.206322 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-02-04 01:12:29.206326 | orchestrator | Wednesday 04 February 2026 01:10:30 +0000 (0:00:15.197) 0:05:56.884 **** 2026-02-04 01:12:29.206330 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:12:29.206334 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:12:29.206340 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:12:29.206344 | orchestrator | 2026-02-04 01:12:29.206348 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-02-04 01:12:29.206355 | orchestrator | Wednesday 04 February 2026 01:10:59 +0000 (0:00:29.237) 0:06:26.122 **** 2026-02-04 01:12:29.206361 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:12:29.206367 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:12:29.206372 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:12:29.206378 | orchestrator | 2026-02-04 01:12:29.206384 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-02-04 01:12:29.206389 | orchestrator | Wednesday 04 February 2026 01:11:00 +0000 (0:00:00.782) 0:06:26.905 **** 2026-02-04 01:12:29.206399 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:12:29.206404 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:12:29.206410 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:12:29.206416 | orchestrator | 2026-02-04 01:12:29.206422 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-02-04 01:12:29.206428 | orchestrator | Wednesday 04 February 2026 01:11:01 
+0000 (0:00:00.776) 0:06:27.681 **** 2026-02-04 01:12:29.206434 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:12:29.206440 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:12:29.206446 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:12:29.206453 | orchestrator | 2026-02-04 01:12:29.206459 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-02-04 01:12:29.206465 | orchestrator | Wednesday 04 February 2026 01:11:18 +0000 (0:00:17.054) 0:06:44.736 **** 2026-02-04 01:12:29.206471 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:12:29.206496 | orchestrator | 2026-02-04 01:12:29.206504 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-02-04 01:12:29.206510 | orchestrator | Wednesday 04 February 2026 01:11:18 +0000 (0:00:00.116) 0:06:44.853 **** 2026-02-04 01:12:29.206516 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:12:29.206521 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.206528 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.206534 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.206540 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:12:29.206546 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-02-04 01:12:29.206553 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-04 01:12:29.206559 | orchestrator | 2026-02-04 01:12:29.206565 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-02-04 01:12:29.206571 | orchestrator | Wednesday 04 February 2026 01:11:39 +0000 (0:00:21.229) 0:07:06.082 **** 2026-02-04 01:12:29.206577 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.206583 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.206589 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:12:29.206595 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:12:29.206602 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.206608 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:12:29.206614 | orchestrator | 2026-02-04 01:12:29.206620 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-02-04 01:12:29.206628 | orchestrator | Wednesday 04 February 2026 01:11:49 +0000 (0:00:10.198) 0:07:16.281 **** 2026-02-04 01:12:29.206634 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:12:29.206643 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:12:29.206650 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:12:29.206656 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:12:29.206662 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:12:29.206668 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4 2026-02-04 01:12:29.206674 | orchestrator | 2026-02-04 01:12:29.206679 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-04 01:12:29.206685 | orchestrator | Wednesday 04 February 2026 01:11:53 +0000 (0:00:03.725) 0:07:20.006 **** 2026-02-04 01:12:29.206690 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-04 01:12:29.206696 | 
orchestrator | 2026-02-04 01:12:29.206702 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-04 01:12:29.206708 | orchestrator | Wednesday 04 February 2026 01:12:06 +0000 (0:00:13.522) 0:07:33.529 **** 2026-02-04 01:12:29.206714 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-04 01:12:29.206721 | orchestrator | 2026-02-04 01:12:29.206727 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-02-04 01:12:29.206733 | orchestrator | Wednesday 04 February 2026 01:12:08 +0000 (0:00:01.247) 0:07:34.776 **** 2026-02-04 01:12:29.206745 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:12:29.206749 | orchestrator | 2026-02-04 01:12:29.206753 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-02-04 01:12:29.206757 | orchestrator | Wednesday 04 February 2026 01:12:09 +0000 (0:00:01.226) 0:07:36.003 **** 2026-02-04 01:12:29.206760 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-04 01:12:29.206764 | orchestrator | 2026-02-04 01:12:29.206768 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-02-04 01:12:29.206772 | orchestrator | Wednesday 04 February 2026 01:12:20 +0000 (0:00:11.499) 0:07:47.502 **** 2026-02-04 01:12:29.206776 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:12:29.206780 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:12:29.206784 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:12:29.206787 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:12:29.206791 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:12:29.206795 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:12:29.206799 | orchestrator | 2026-02-04 01:12:29.206803 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-02-04 01:12:29.206807 | orchestrator | 2026-02-04 
01:12:29.206810 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-02-04 01:12:29.206814 | orchestrator | Wednesday 04 February 2026 01:12:22 +0000 (0:00:01.661) 0:07:49.164 ****
2026-02-04 01:12:29.206818 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:12:29.206822 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:12:29.206826 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:12:29.206830 | orchestrator |
2026-02-04 01:12:29.206837 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-02-04 01:12:29.206841 | orchestrator |
2026-02-04 01:12:29.206845 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-02-04 01:12:29.206853 | orchestrator | Wednesday 04 February 2026 01:12:23 +0000 (0:00:01.002) 0:07:50.167 ****
2026-02-04 01:12:29.206857 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:12:29.206861 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:12:29.206865 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:12:29.206868 | orchestrator |
2026-02-04 01:12:29.206872 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-02-04 01:12:29.206876 | orchestrator |
2026-02-04 01:12:29.206880 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-02-04 01:12:29.206884 | orchestrator | Wednesday 04 February 2026 01:12:23 +0000 (0:00:00.490) 0:07:50.658 ****
2026-02-04 01:12:29.206887 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-02-04 01:12:29.206891 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-02-04 01:12:29.206895 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-02-04 01:12:29.206899 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-02-04 01:12:29.206903 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-02-04 01:12:29.206907 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-02-04 01:12:29.206911 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:12:29.206915 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-02-04 01:12:29.206918 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-02-04 01:12:29.206922 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-02-04 01:12:29.206926 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-02-04 01:12:29.206930 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-02-04 01:12:29.206934 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-02-04 01:12:29.206938 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:12:29.206941 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-02-04 01:12:29.206945 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-02-04 01:12:29.206970 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-02-04 01:12:29.206980 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-02-04 01:12:29.206986 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-02-04 01:12:29.206992 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-02-04 01:12:29.206999 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:12:29.207005 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-02-04 01:12:29.207011 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-04 01:12:29.207017 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-04 01:12:29.207064 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-02-04 01:12:29.207088 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-02-04 01:12:29.207095 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-02-04 01:12:29.207101 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:12:29.207108 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-02-04 01:12:29.207113 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-02-04 01:12:29.207117 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-02-04 01:12:29.207121 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-02-04 01:12:29.207125 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-02-04 01:12:29.207128 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-02-04 01:12:29.207132 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:12:29.207136 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-02-04 01:12:29.207140 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-02-04 01:12:29.207144 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-02-04 01:12:29.207148 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-02-04 01:12:29.207151 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-02-04 01:12:29.207155 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-02-04 01:12:29.207159 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:12:29.207163 | orchestrator |
2026-02-04 01:12:29.207167 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-02-04 01:12:29.207171 | orchestrator |
2026-02-04 01:12:29.207174 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-02-04 01:12:29.207178 | orchestrator | Wednesday 04 February 2026 01:12:25 +0000 (0:00:01.242) 0:07:51.901 ****
2026-02-04 01:12:29.207182 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-02-04 01:12:29.207186 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-02-04 01:12:29.207190 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:12:29.207193 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-02-04 01:12:29.207197 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-02-04 01:12:29.207201 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:12:29.207205 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-02-04 01:12:29.207209 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-02-04 01:12:29.207212 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:12:29.207216 | orchestrator |
2026-02-04 01:12:29.207220 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-02-04 01:12:29.207224 | orchestrator |
2026-02-04 01:12:29.207227 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-02-04 01:12:29.207234 | orchestrator | Wednesday 04 February 2026 01:12:25 +0000 (0:00:00.675) 0:07:52.577 ****
2026-02-04 01:12:29.207238 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:12:29.207242 | orchestrator |
2026-02-04 01:12:29.207246 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-02-04 01:12:29.207254 | orchestrator |
2026-02-04 01:12:29.207261 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-02-04 01:12:29.207265 | orchestrator | Wednesday 04 February 2026 01:12:26 +0000 (0:00:00.630) 0:07:53.207 ****
2026-02-04 01:12:29.207269 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:12:29.207273 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:12:29.207277 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:12:29.207283 | orchestrator |
2026-02-04 01:12:29.207290 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 01:12:29.207296 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:12:29.207303 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-02-04 01:12:29.207309 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-04 01:12:29.207315 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-04 01:12:29.207321 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-02-04 01:12:29.207327 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-02-04 01:12:29.207334 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-04 01:12:29.207340 | orchestrator |
2026-02-04 01:12:29.207346 | orchestrator |
2026-02-04 01:12:29.207350 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 01:12:29.207354 | orchestrator | Wednesday 04 February 2026 01:12:26 +0000 (0:00:00.434) 0:07:53.642 ****
2026-02-04 01:12:29.207358 | orchestrator | ===============================================================================
2026-02-04 01:12:29.207362 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 33.29s
2026-02-04 01:12:29.207366 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 29.54s
2026-02-04 01:12:29.207369 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 29.24s
2026-02-04 01:12:29.207373 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.23s
2026-02-04 01:12:29.207377 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 20.32s
2026-02-04 01:12:29.207381 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 17.05s
2026-02-04 01:12:29.207385 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.82s
2026-02-04 01:12:29.207388 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 15.20s
2026-02-04 01:12:29.207392 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.08s
2026-02-04 01:12:29.207396 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.52s
2026-02-04 01:12:29.207400 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.31s
2026-02-04 01:12:29.207404 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 13.30s
2026-02-04 01:12:29.207407 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 12.47s
2026-02-04 01:12:29.207411 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.26s
2026-02-04 01:12:29.207415 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.68s
2026-02-04 01:12:29.207419 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.50s
2026-02-04 01:12:29.207423 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.20s
2026-02-04 01:12:29.207430 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.73s
2026-02-04 01:12:29.207434 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.16s
2026-02-04 01:12:29.207437 | orchestrator | service-ks-register : nova | Creating endpoints ------------------------- 6.70s
2026-02-04 01:12:29.207441 | orchestrator | 2026-02-04 01:12:29 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED
2026-02-04 01:12:29.207447 | orchestrator | 2026-02-04 01:12:29 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:12:32.252771 | orchestrator | 2026-02-04 01:12:32 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED
2026-02-04 01:12:32.252849 | orchestrator | 2026-02-04 01:12:32 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:12:35.290996 | orchestrator | 2026-02-04 01:12:35 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED
2026-02-04 01:12:35.291077 | orchestrator | 2026-02-04 01:12:35 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:12:38.332156 | orchestrator | 2026-02-04 01:12:38 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED
2026-02-04 01:12:38.332296 | orchestrator | 2026-02-04 01:12:38 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:12:41.373286 | orchestrator | 2026-02-04 01:12:41 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED
2026-02-04 01:12:41.373406 | orchestrator | 2026-02-04 01:12:41 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:12:44.413709 | orchestrator | 2026-02-04 01:12:44 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED
2026-02-04 01:12:44.413818 | orchestrator | 2026-02-04 01:12:44 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:12:47.456321 | orchestrator | 2026-02-04 01:12:47 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED
2026-02-04 01:12:47.456509 | orchestrator | 2026-02-04 01:12:47 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:12:50.492068 | orchestrator | 2026-02-04 01:12:50 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED
2026-02-04 01:12:50.492121 | orchestrator | 2026-02-04 01:12:50 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:12:53.530771 | orchestrator | 2026-02-04 01:12:53 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED
2026-02-04 01:12:53.530829 | orchestrator | 2026-02-04 01:12:53 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:12:56.569704 | orchestrator | 2026-02-04 01:12:56 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED
2026-02-04 01:12:56.569762 | orchestrator | 2026-02-04 01:12:56 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:12:59.606695 | orchestrator | 2026-02-04 01:12:59 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED
2026-02-04 01:12:59.606746 | orchestrator | 2026-02-04 01:12:59 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:13:02.649319 | orchestrator | 2026-02-04 01:13:02 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED
2026-02-04 01:13:02.649825 | orchestrator | 2026-02-04 01:13:02 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:13:05.736760 | orchestrator | 2026-02-04 01:13:05 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED
2026-02-04 01:13:05.736811 | orchestrator | 2026-02-04 01:13:05 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:13:08.774700 | orchestrator | 2026-02-04 01:13:08 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED
2026-02-04 01:13:08.774785 | orchestrator | 2026-02-04 01:13:08 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:13:11.820784 | orchestrator | 2026-02-04 01:13:11 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED
2026-02-04 01:13:11.820835 | orchestrator | 2026-02-04 01:13:11 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:13:14.860820 | orchestrator | 2026-02-04 01:13:14 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED
2026-02-04 01:13:14.860885 | orchestrator | 2026-02-04 01:13:14 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:13:17.907465 | orchestrator | 2026-02-04 01:13:17 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED
2026-02-04 01:13:17.907539 | orchestrator | 2026-02-04 01:13:17 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:13:20.953845 | orchestrator | 2026-02-04 01:13:20 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED
2026-02-04 01:13:20.953917 | orchestrator | 2026-02-04 01:13:20 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:13:23.998677 | orchestrator | 2026-02-04 01:13:23 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED
2026-02-04 01:13:23.998761 | orchestrator | 2026-02-04 01:13:23 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:13:27.039211 | orchestrator | 2026-02-04 01:13:27 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state STARTED
2026-02-04 01:13:27.039278 | orchestrator | 2026-02-04 01:13:27 | INFO  | Wait 1 second(s) until the next check
2026-02-04 01:13:30.081702 | orchestrator | 2026-02-04 01:13:30 | INFO  | Task 56fb1eba-9d9e-4a43-bf39-77fc44ca6798 is in state SUCCESS
2026-02-04 01:13:30.083327 | orchestrator |
2026-02-04 01:13:30.083371 | orchestrator |
2026-02-04 01:13:30.083380 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 01:13:30.083387 | orchestrator |
2026-02-04 01:13:30.083392 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 01:13:30.083397 | orchestrator | Wednesday 04 February 2026 01:08:42 +0000 (0:00:00.269) 0:00:00.269 ****
2026-02-04 01:13:30.083403 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:13:30.083453 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:13:30.083459 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:13:30.083462 | orchestrator |
2026-02-04 01:13:30.083466 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 01:13:30.083469 | orchestrator | Wednesday 04 February 2026 01:08:43 +0000 (0:00:00.294) 0:00:00.564 ****
2026-02-04 01:13:30.083472 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-02-04 01:13:30.083476 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-02-04 01:13:30.083479 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-02-04 01:13:30.083482 | orchestrator |
2026-02-04 01:13:30.083486 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-02-04 01:13:30.083517 | orchestrator |
2026-02-04 01:13:30.083522 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-04 01:13:30.083525 | orchestrator | Wednesday 04 February 2026 01:08:43 +0000 (0:00:00.425) 0:00:00.989 ****
2026-02-04 01:13:30.083529 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:13:30.083535 | orchestrator |
2026-02-04 01:13:30.083539 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-02-04 01:13:30.083547 | orchestrator | Wednesday 04 February 2026 01:08:43 +0000 (0:00:00.509) 0:00:01.499 ****
2026-02-04 01:13:30.083554 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-02-04 01:13:30.083559 | orchestrator |
2026-02-04 01:13:30.083564 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-02-04 01:13:30.083594 | orchestrator | Wednesday 04 February 2026 01:08:47 +0000 (0:00:03.538) 0:00:05.038 ****
2026-02-04 01:13:30.083600 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-02-04 01:13:30.083606 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-02-04 01:13:30.083611 | orchestrator |
2026-02-04 01:13:30.083616 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-02-04 01:13:30.083621 | orchestrator | Wednesday 04 February 2026 01:08:53 +0000 (0:00:06.352) 0:00:11.390 ****
2026-02-04 01:13:30.083627 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-04 01:13:30.083633 | orchestrator |
2026-02-04 01:13:30.083638 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-02-04 01:13:30.083643 | orchestrator | Wednesday 04 February 2026 01:08:56 +0000 (0:00:03.121) 0:00:14.512 ****
2026-02-04 01:13:30.083649 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-04 01:13:30.083654 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-02-04 01:13:30.083660 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-02-04 01:13:30.083665 | orchestrator |
2026-02-04 01:13:30.083670 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-02-04 01:13:30.083676 | orchestrator | Wednesday 04 February 2026 01:09:05 +0000 (0:00:08.072) 0:00:22.584 ****
2026-02-04 01:13:30.083681 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-04 01:13:30.083686 | orchestrator |
2026-02-04 01:13:30.083692 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-02-04 01:13:30.083697 | orchestrator | Wednesday 04 February 2026 01:09:08 +0000 (0:00:03.495) 0:00:26.079 ****
2026-02-04 01:13:30.083702 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-02-04 01:13:30.083707 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-02-04 01:13:30.083713 | orchestrator |
2026-02-04 01:13:30.083717 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-02-04 01:13:30.083722 | orchestrator | Wednesday 04 February 2026 01:09:16 +0000 (0:00:07.499) 0:00:33.579 ****
2026-02-04 01:13:30.083730 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-02-04 01:13:30.083735 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-02-04 01:13:30.083740 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-02-04 01:13:30.083745 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-02-04 01:13:30.083750 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-02-04 01:13:30.083755 | orchestrator |
2026-02-04 01:13:30.083760 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-04 01:13:30.083765 | orchestrator | Wednesday 04 February 2026 01:09:32 +0000 (0:00:15.995) 0:00:49.575 ****
2026-02-04 01:13:30.083770 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:13:30.083936 | orchestrator |
2026-02-04 01:13:30.083942 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-02-04 01:13:30.083947 | orchestrator | Wednesday 04 February 2026 01:09:32 +0000 (0:00:00.735) 0:00:50.311 ****
2026-02-04 01:13:30.083952 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:13:30.083957 | orchestrator |
2026-02-04 01:13:30.083962 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-02-04 01:13:30.083988 | orchestrator | Wednesday 04 February 2026 01:09:37 +0000 (0:00:05.013) 0:00:55.324 ****
2026-02-04 01:13:30.083994 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:13:30.084042 | orchestrator |
2026-02-04 01:13:30.084049 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-02-04 01:13:30.084063 | orchestrator | Wednesday 04 February 2026 01:09:42 +0000 (0:00:04.711) 0:01:00.036 ****
2026-02-04 01:13:30.084069 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:13:30.084082 | orchestrator |
2026-02-04 01:13:30.084087 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-02-04 01:13:30.084092 | orchestrator | Wednesday 04 February 2026 01:09:45 +0000 (0:00:03.429) 0:01:03.465 ****
2026-02-04 01:13:30.084097 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-02-04 01:13:30.084108 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-02-04 01:13:30.084113 | orchestrator |
2026-02-04 01:13:30.084118 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-02-04 01:13:30.084123 | orchestrator | Wednesday 04 February 2026 01:09:55 +0000 (0:00:10.049) 0:01:13.515 ****
2026-02-04 01:13:30.084128 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-02-04 01:13:30.084133 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-02-04 01:13:30.084140 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-02-04 01:13:30.084145 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-02-04 01:13:30.084151 | orchestrator |
2026-02-04 01:13:30.084157 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-02-04 01:13:30.084162 | orchestrator | Wednesday 04 February 2026 01:10:13 +0000 (0:00:17.207) 0:01:30.722 ****
2026-02-04 01:13:30.084167 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:13:30.084173 | orchestrator |
2026-02-04 01:13:30.084178 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-02-04 01:13:30.084183 | orchestrator | Wednesday 04 February 2026 01:10:17 +0000 (0:00:04.685) 0:01:35.408 ****
2026-02-04 01:13:30.084188 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:13:30.084193 | orchestrator |
2026-02-04 01:13:30.084199 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-02-04 01:13:30.084204 | orchestrator | Wednesday 04 February 2026 01:10:23 +0000 (0:00:05.642) 0:01:41.050 ****
2026-02-04 01:13:30.084209 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:13:30.084214 | orchestrator |
2026-02-04 01:13:30.084219 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-02-04 01:13:30.084225 | orchestrator | Wednesday 04 February 2026 01:10:23 +0000 (0:00:00.199) 0:01:41.249 ****
2026-02-04 01:13:30.084230 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:13:30.084235 | orchestrator |
2026-02-04 01:13:30.084240 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-04 01:13:30.084246 | orchestrator | Wednesday 04 February 2026 01:10:27 +0000 (0:00:03.926) 0:01:45.176 ****
2026-02-04 01:13:30.084251 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:13:30.084256 | orchestrator |
2026-02-04 01:13:30.084261 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-02-04 01:13:30.084266 | orchestrator | Wednesday 04 February 2026 01:10:28 +0000 (0:00:00.951) 0:01:46.127 ****
2026-02-04 01:13:30.084271 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:13:30.084276 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:13:30.084281 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:13:30.084286 | orchestrator |
2026-02-04 01:13:30.084291 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-02-04 01:13:30.084296 | orchestrator | Wednesday 04 February 2026 01:10:33 +0000 (0:00:04.913) 0:01:51.041 ****
2026-02-04 01:13:30.084302 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:13:30.084307 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:13:30.084312 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:13:30.084317 | orchestrator |
2026-02-04 01:13:30.084323 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-02-04 01:13:30.084333 | orchestrator | Wednesday 04 February 2026 01:10:38 +0000 (0:00:05.127) 0:01:56.168 ****
2026-02-04 01:13:30.084338 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:13:30.084343 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:13:30.084348 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:13:30.084353 | orchestrator |
2026-02-04 01:13:30.084444 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-02-04 01:13:30.084452 | orchestrator | Wednesday 04 February 2026 01:10:39 +0000 (0:00:00.743) 0:01:56.912 ****
2026-02-04 01:13:30.084574 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:13:30.084581 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:13:30.084587 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:13:30.084592 | orchestrator |
2026-02-04 01:13:30.084597 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-02-04 01:13:30.084603 | orchestrator | Wednesday 04 February 2026 01:10:41 +0000 (0:00:01.816) 0:01:58.729 ****
2026-02-04 01:13:30.084608 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:13:30.084613 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:13:30.084618 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:13:30.084623 | orchestrator |
2026-02-04 01:13:30.084628 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-02-04 01:13:30.084634 | orchestrator | Wednesday 04 February 2026 01:10:42 +0000 (0:00:01.174) 0:01:59.904 ****
2026-02-04 01:13:30.084639 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:13:30.084644 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:13:30.084649 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:13:30.084654 | orchestrator |
2026-02-04 01:13:30.084659 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-02-04 01:13:30.084664 | orchestrator | Wednesday 04 February 2026 01:10:43 +0000 (0:00:01.073) 0:02:00.978 ****
2026-02-04 01:13:30.084670 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:13:30.084675 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:13:30.084680 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:13:30.084685 | orchestrator |
2026-02-04 01:13:30.084707 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-02-04 01:13:30.084713 | orchestrator | Wednesday 04 February 2026 01:10:45 +0000 (0:00:01.866) 0:02:02.844 ****
2026-02-04 01:13:30.084718 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:13:30.084723 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:13:30.084728 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:13:30.084733 | orchestrator |
2026-02-04 01:13:30.084743 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-02-04 01:13:30.084748 | orchestrator | Wednesday 04 February 2026 01:10:46 +0000 (0:00:01.586) 0:02:04.431 ****
2026-02-04 01:13:30.084753 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:13:30.084758 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:13:30.084764 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:13:30.084769 | orchestrator |
2026-02-04 01:13:30.084774 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-02-04 01:13:30.084779 | orchestrator | Wednesday 04 February 2026 01:10:47 +0000 (0:00:00.611) 0:02:05.042 ****
2026-02-04 01:13:30.084784 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:13:30.084790 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:13:30.084795 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:13:30.084800 | orchestrator |
2026-02-04 01:13:30.084806 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-04 01:13:30.084811 | orchestrator | Wednesday 04 February 2026 01:10:50 +0000 (0:00:02.543) 0:02:07.585 ****
2026-02-04 01:13:30.084816 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:13:30.084822 | orchestrator |
2026-02-04 01:13:30.084827 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-02-04 01:13:30.084832 | orchestrator | Wednesday 04 February 2026 01:10:50 +0000 (0:00:00.580) 0:02:08.165 ****
2026-02-04 01:13:30.084837 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:13:30.084848 | orchestrator |
2026-02-04 01:13:30.084853 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-02-04 01:13:30.084858 | orchestrator | Wednesday 04 February 2026 01:10:54 +0000 (0:00:04.072) 0:02:12.238 ****
2026-02-04 01:13:30.084864 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:13:30.084869 | orchestrator |
2026-02-04 01:13:30.084874 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-02-04 01:13:30.084879 | orchestrator | Wednesday 04 February 2026 01:10:58 +0000 (0:00:03.313) 0:02:15.552 ****
2026-02-04 01:13:30.084884 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-02-04 01:13:30.084890 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-02-04 01:13:30.084895 | orchestrator |
2026-02-04 01:13:30.084900 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-02-04 01:13:30.084905 | orchestrator | Wednesday 04 February 2026 01:11:05 +0000 (0:00:07.379) 0:02:22.932 ****
2026-02-04 01:13:30.084910 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:13:30.084915 | orchestrator |
2026-02-04 01:13:30.084921 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-02-04 01:13:30.084926 | orchestrator | Wednesday 04 February 2026 01:11:09 +0000 (0:00:03.749) 0:02:26.681 ****
2026-02-04 01:13:30.084931 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:13:30.084936 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:13:30.084941 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:13:30.084947 | orchestrator |
2026-02-04 01:13:30.084952 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-02-04 01:13:30.084957 | orchestrator | Wednesday 04 February 2026 01:11:09 +0000 (0:00:00.267) 0:02:26.948 ****
2026-02-04 01:13:30.084963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-04 01:13:30.084987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-04 01:13:30.084996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-04 01:13:30.085019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-04 01:13:30.085027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-04 01:13:30.085032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-04 01:13:30.085039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085125 | orchestrator | 2026-02-04 01:13:30.085131 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-02-04 01:13:30.085139 | orchestrator | Wednesday 04 February 2026 01:11:11 +0000 (0:00:02.400) 0:02:29.349 **** 2026-02-04 01:13:30.085145 | 
orchestrator | skipping: [testbed-node-0] 2026-02-04 01:13:30.085150 | orchestrator | 2026-02-04 01:13:30.085157 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-02-04 01:13:30.085163 | orchestrator | Wednesday 04 February 2026 01:11:11 +0000 (0:00:00.125) 0:02:29.475 **** 2026-02-04 01:13:30.085168 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:13:30.085174 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:13:30.085179 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:13:30.085184 | orchestrator | 2026-02-04 01:13:30.085189 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-02-04 01:13:30.085195 | orchestrator | Wednesday 04 February 2026 01:11:12 +0000 (0:00:00.370) 0:02:29.845 **** 2026-02-04 01:13:30.085200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 01:13:30.085206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 01:13:30.085212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 01:13:30.085218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 01:13:30.085224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:13:30.085232 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:13:30.085259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 01:13:30.085267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 01:13:30.085273 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 01:13:30.085278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 01:13:30.085284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:13:30.085290 | orchestrator | skipping: 
[testbed-node-1] 2026-02-04 01:13:30.085309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 01:13:30.085322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 01:13:30.085329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 01:13:30.085335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 01:13:30.085341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:13:30.085346 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:13:30.085352 | orchestrator | 2026-02-04 01:13:30.085357 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-04 01:13:30.085363 | orchestrator | Wednesday 04 February 2026 01:11:12 +0000 (0:00:00.604) 0:02:30.450 **** 2026-02-04 01:13:30.085368 | orchestrator | included: 
/ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:13:30.085374 | orchestrator | 2026-02-04 01:13:30.085380 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-02-04 01:13:30.085385 | orchestrator | Wednesday 04 February 2026 01:11:13 +0000 (0:00:00.507) 0:02:30.957 **** 2026-02-04 01:13:30.085391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:30.085418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:30.085425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:30.085429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:13:30.085433 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:13:30.085437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:13:30.085443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085498 | orchestrator | 2026-02-04 01:13:30.085503 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-02-04 01:13:30.085509 | orchestrator | Wednesday 04 February 2026 01:11:18 +0000 (0:00:05.033) 0:02:35.990 **** 2026-02-04 01:13:30.085515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 01:13:30.085521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 01:13:30.085527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 01:13:30.085536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 01:13:30.085545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:13:30.085551 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:13:30.085560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 01:13:30.085567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 01:13:30.085574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 01:13:30.085580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 01:13:30.085588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:13:30.085594 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:13:30.085605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 01:13:30.085610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 01:13:30.085616 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 01:13:30.085622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 01:13:30.085631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:13:30.085636 | orchestrator | skipping: 
[testbed-node-2] 2026-02-04 01:13:30.085641 | orchestrator | 2026-02-04 01:13:30.085647 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-02-04 01:13:30.085652 | orchestrator | Wednesday 04 February 2026 01:11:20 +0000 (0:00:01.694) 0:02:37.684 **** 2026-02-04 01:13:30.085658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 01:13:30.085669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 01:13:30.085674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 01:13:30.085680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 01:13:30.085685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:13:30.085696 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:13:30.085701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 01:13:30.085706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 01:13:30.085717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 01:13:30.085723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 01:13:30.085729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:13:30.085734 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:13:30.085739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 01:13:30.085749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 01:13:30.085754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 01:13:30.085764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 01:13:30.085771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 01:13:30.085777 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:13:30.085782 | orchestrator | 2026-02-04 01:13:30.085788 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-02-04 01:13:30.085793 | orchestrator | Wednesday 04 February 2026 01:11:22 +0000 (0:00:01.903) 0:02:39.588 **** 2026-02-04 01:13:30.085799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:30.085808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:30.085813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:30.085823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:13:30.085829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:13:30.085835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:13:30.085843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:30.085901 | 
orchestrator | 2026-02-04 01:13:30.085906 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-02-04 01:13:30.085912 | orchestrator | Wednesday 04 February 2026 01:11:27 +0000 (0:00:04.984) 0:02:44.572 **** 2026-02-04 01:13:30.085917 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-04 01:13:30.085923 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-04 01:13:30.085928 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-04 01:13:30.085933 | orchestrator | 2026-02-04 01:13:30.085939 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-02-04 01:13:30.085944 | orchestrator | Wednesday 04 February 2026 01:11:28 +0000 (0:00:01.714) 0:02:46.286 **** 2026-02-04 01:13:30.085955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:30.085961 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:30.085970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:30.085976 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:13:30.085982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:13:30.085988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:13:30.085998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.086042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.086053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.086058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.086064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.086070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.086076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:30.086089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:30.086099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:30.086104 | orchestrator | 2026-02-04 01:13:30.086110 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-02-04 01:13:30.086115 | orchestrator | Wednesday 04 February 2026 01:11:49 +0000 (0:00:20.407) 0:03:06.694 **** 2026-02-04 01:13:30.086121 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:13:30.086126 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:13:30.086132 | orchestrator | changed: [testbed-node-1] 
2026-02-04 01:13:30.086137 | orchestrator | 2026-02-04 01:13:30.086142 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-02-04 01:13:30.086148 | orchestrator | Wednesday 04 February 2026 01:11:50 +0000 (0:00:01.601) 0:03:08.297 **** 2026-02-04 01:13:30.086153 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-04 01:13:30.086159 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-04 01:13:30.086164 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-04 01:13:30.086169 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-04 01:13:30.086174 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-04 01:13:30.086180 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-04 01:13:30.086185 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-04 01:13:30.086190 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-04 01:13:30.086195 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-04 01:13:30.086201 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-04 01:13:30.086206 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-04 01:13:30.086212 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-04 01:13:30.086217 | orchestrator | 2026-02-04 01:13:30.086223 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-02-04 01:13:30.086229 | orchestrator | Wednesday 04 February 2026 01:11:56 +0000 (0:00:05.502) 0:03:13.799 **** 2026-02-04 01:13:30.086234 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-04 01:13:30.086240 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-04 
01:13:30.086245 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-04 01:13:30.086251 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-04 01:13:30.086256 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-04 01:13:30.086262 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-04 01:13:30.086267 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-04 01:13:30.086273 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-04 01:13:30.086278 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-04 01:13:30.086284 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-04 01:13:30.086289 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-04 01:13:30.086295 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-04 01:13:30.086306 | orchestrator | 2026-02-04 01:13:30.086312 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-02-04 01:13:30.086317 | orchestrator | Wednesday 04 February 2026 01:12:01 +0000 (0:00:04.965) 0:03:18.764 **** 2026-02-04 01:13:30.086323 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-04 01:13:30.086328 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-04 01:13:30.086333 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-04 01:13:30.086339 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-04 01:13:30.086344 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-04 01:13:30.086349 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-04 01:13:30.086355 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-04 
01:13:30.086360 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-04 01:13:30.086369 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-04 01:13:30.086375 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-04 01:13:30.086380 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-04 01:13:30.086386 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-04 01:13:30.086391 | orchestrator | 2026-02-04 01:13:30.086400 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-02-04 01:13:30.086405 | orchestrator | Wednesday 04 February 2026 01:12:06 +0000 (0:00:04.947) 0:03:23.711 **** 2026-02-04 01:13:30.086412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:30.086417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:30.086423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 01:13:30.086431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:13:30.086440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:13:30.086449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 01:13:30.086454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.086460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.086465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.086474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.086480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.086492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 01:13:30.086498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:30.086503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:30.086509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 01:13:30.086514 | orchestrator | 2026-02-04 01:13:30.086520 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-04 01:13:30.086529 | orchestrator | Wednesday 04 February 2026 01:12:09 +0000 (0:00:03.576) 0:03:27.287 **** 2026-02-04 01:13:30.086534 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:13:30.086540 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:13:30.086546 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:13:30.086551 | orchestrator | 2026-02-04 01:13:30.086557 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 
2026-02-04 01:13:30.086562 | orchestrator | Wednesday 04 February 2026 01:12:10 +0000 (0:00:00.288) 0:03:27.576 **** 2026-02-04 01:13:30.086568 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:13:30.086573 | orchestrator | 2026-02-04 01:13:30.086579 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-02-04 01:13:30.086584 | orchestrator | Wednesday 04 February 2026 01:12:11 +0000 (0:00:01.889) 0:03:29.465 **** 2026-02-04 01:13:30.086590 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:13:30.086595 | orchestrator | 2026-02-04 01:13:30.086600 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-02-04 01:13:30.086606 | orchestrator | Wednesday 04 February 2026 01:12:14 +0000 (0:00:02.085) 0:03:31.550 **** 2026-02-04 01:13:30.086611 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:13:30.086617 | orchestrator | 2026-02-04 01:13:30.086623 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-02-04 01:13:30.086629 | orchestrator | Wednesday 04 February 2026 01:12:16 +0000 (0:00:02.304) 0:03:33.855 **** 2026-02-04 01:13:30.086634 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:13:30.086639 | orchestrator | 2026-02-04 01:13:30.086645 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-02-04 01:13:30.086650 | orchestrator | Wednesday 04 February 2026 01:12:19 +0000 (0:00:02.842) 0:03:36.697 **** 2026-02-04 01:13:30.086656 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:13:30.086661 | orchestrator | 2026-02-04 01:13:30.086666 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-04 01:13:30.086672 | orchestrator | Wednesday 04 February 2026 01:12:41 +0000 (0:00:22.494) 0:03:59.192 **** 2026-02-04 01:13:30.086678 | orchestrator | 2026-02-04 01:13:30.086683 | orchestrator | TASK 
[octavia : Flush handlers] ************************************************ 2026-02-04 01:13:30.086689 | orchestrator | Wednesday 04 February 2026 01:12:41 +0000 (0:00:00.070) 0:03:59.262 **** 2026-02-04 01:13:30.086694 | orchestrator | 2026-02-04 01:13:30.086700 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-04 01:13:30.086705 | orchestrator | Wednesday 04 February 2026 01:12:41 +0000 (0:00:00.086) 0:03:59.349 **** 2026-02-04 01:13:30.086711 | orchestrator | 2026-02-04 01:13:30.086716 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-02-04 01:13:30.086725 | orchestrator | Wednesday 04 February 2026 01:12:41 +0000 (0:00:00.092) 0:03:59.441 **** 2026-02-04 01:13:30.086731 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:13:30.086737 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:13:30.086742 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:13:30.086748 | orchestrator | 2026-02-04 01:13:30.086754 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-02-04 01:13:30.086762 | orchestrator | Wednesday 04 February 2026 01:12:56 +0000 (0:00:14.650) 0:04:14.091 **** 2026-02-04 01:13:30.086767 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:13:30.086773 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:13:30.086778 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:13:30.086784 | orchestrator | 2026-02-04 01:13:30.086790 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-02-04 01:13:30.086795 | orchestrator | Wednesday 04 February 2026 01:13:07 +0000 (0:00:10.538) 0:04:24.630 **** 2026-02-04 01:13:30.086801 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:13:30.086806 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:13:30.086812 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:13:30.086818 | 
orchestrator | 2026-02-04 01:13:30.086823 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-02-04 01:13:30.086832 | orchestrator | Wednesday 04 February 2026 01:13:12 +0000 (0:00:05.250) 0:04:29.881 **** 2026-02-04 01:13:30.086837 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:13:30.086843 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:13:30.086848 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:13:30.086855 | orchestrator | 2026-02-04 01:13:30.086860 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-02-04 01:13:30.086865 | orchestrator | Wednesday 04 February 2026 01:13:22 +0000 (0:00:10.194) 0:04:40.076 **** 2026-02-04 01:13:30.086871 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:13:30.086877 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:13:30.086882 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:13:30.086887 | orchestrator | 2026-02-04 01:13:30.086893 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:13:30.086899 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 01:13:30.086905 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 01:13:30.086910 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 01:13:30.086916 | orchestrator | 2026-02-04 01:13:30.086922 | orchestrator | 2026-02-04 01:13:30.086927 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:13:30.086933 | orchestrator | Wednesday 04 February 2026 01:13:27 +0000 (0:00:05.181) 0:04:45.258 **** 2026-02-04 01:13:30.086938 | orchestrator | =============================================================================== 2026-02-04 
01:13:30.086944 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.49s
2026-02-04 01:13:30.086949 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 20.41s
2026-02-04 01:13:30.086955 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.21s
2026-02-04 01:13:30.086960 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.00s
2026-02-04 01:13:30.086966 | orchestrator | octavia : Restart octavia-api container -------------------------------- 14.65s
2026-02-04 01:13:30.086971 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 10.54s
2026-02-04 01:13:30.086976 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.20s
2026-02-04 01:13:30.086982 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.05s
2026-02-04 01:13:30.086987 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.07s
2026-02-04 01:13:30.086993 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.50s
2026-02-04 01:13:30.086999 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.38s
2026-02-04 01:13:30.087032 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.35s
2026-02-04 01:13:30.087038 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.64s
2026-02-04 01:13:30.087041 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.50s
2026-02-04 01:13:30.087045 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 5.25s
2026-02-04 01:13:30.087048 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.18s
2026-02-04 01:13:30.087051 | orchestrator | octavia : Update Octavia health manager port host_id -------------------- 5.13s
2026-02-04 01:13:30.087054 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.03s
2026-02-04 01:13:30.087057 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.01s
2026-02-04 01:13:30.087063 | orchestrator | octavia : Copying over config.json files for services ------------------- 4.98s
2026-02-04 01:13:30.087068 | orchestrator | 2026-02-04 01:13:30 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-04 01:13:33.126671 | orchestrator | 2026-02-04 01:13:33 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-04 01:13:36.159272 | orchestrator | 2026-02-04 01:13:36 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-04 01:13:39.201615 | orchestrator | 2026-02-04 01:13:39 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-04 01:13:42.241453 | orchestrator | 2026-02-04 01:13:42 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-04 01:13:45.274782 | orchestrator | 2026-02-04 01:13:45 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-04 01:13:48.321100 | orchestrator | 2026-02-04 01:13:48 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-04 01:13:51.358732 | orchestrator | 2026-02-04 01:13:51 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-04 01:13:54.396042 | orchestrator | 2026-02-04 01:13:54 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-04 01:13:57.438548 | orchestrator | 2026-02-04 01:13:57 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-04 01:14:00.470607 | orchestrator | 2026-02-04 01:14:00 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-04 01:14:03.516575 | orchestrator | 2026-02-04 01:14:03 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-04 01:14:06.554851 | orchestrator | 2026-02-04 01:14:06 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-04 01:14:09.592656 | orchestrator | 2026-02-04 01:14:09 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-04 01:14:12.638804 | orchestrator | 2026-02-04 01:14:12 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-04 01:14:15.685587 | orchestrator | 2026-02-04 01:14:15 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-04 01:14:18.733831 | orchestrator | 2026-02-04 01:14:18 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-04 01:14:21.774292 | orchestrator | 2026-02-04 01:14:21 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-04 01:14:24.813765 | orchestrator | 2026-02-04 01:14:24 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-04 01:14:27.853803 | orchestrator | 2026-02-04 01:14:27 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-04 01:14:30.891731 | orchestrator |
2026-02-04 01:14:31.170889 | orchestrator |
2026-02-04 01:14:31.177571 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Wed Feb 4 01:14:31 UTC 2026
2026-02-04 01:14:31.178846 | orchestrator |
2026-02-04 01:14:31.591265 | orchestrator | ok: Runtime: 0:35:01.976618
2026-02-04 01:14:31.845507 |
2026-02-04 01:14:31.845656 | TASK [Bootstrap services]
2026-02-04 01:14:32.627770 | orchestrator |
2026-02-04 01:14:32.627875 | orchestrator | # BOOTSTRAP
2026-02-04 01:14:32.627886 | orchestrator |
2026-02-04 01:14:32.627893 | orchestrator | + set -e
2026-02-04 01:14:32.627900 | orchestrator | + echo
2026-02-04 01:14:32.627909 | orchestrator | + echo '# BOOTSTRAP'
2026-02-04 01:14:32.627918 | orchestrator | + echo
2026-02-04 01:14:32.627940 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-02-04 01:14:32.636570 | orchestrator | + set -e
2026-02-04 01:14:32.636625 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-02-04 01:14:36.631263 | orchestrator | 2026-02-04 01:14:36 | INFO  | It takes a moment until task 73c19552-b3cf-4b5d-b2da-d7061a8b5157 (flavor-manager) has been started and output is visible here.
2026-02-04 01:14:43.927282 | orchestrator | 2026-02-04 01:14:39 | INFO  | Flavor SCS-1L-1 created
2026-02-04 01:14:43.927349 | orchestrator | 2026-02-04 01:14:39 | INFO  | Flavor SCS-1L-1-5 created
2026-02-04 01:14:43.927356 | orchestrator | 2026-02-04 01:14:40 | INFO  | Flavor SCS-1V-2 created
2026-02-04 01:14:43.927360 | orchestrator | 2026-02-04 01:14:40 | INFO  | Flavor SCS-1V-2-5 created
2026-02-04 01:14:43.927363 | orchestrator | 2026-02-04 01:14:40 | INFO  | Flavor SCS-1V-4 created
2026-02-04 01:14:43.927367 | orchestrator | 2026-02-04 01:14:40 | INFO  | Flavor SCS-1V-4-10 created
2026-02-04 01:14:43.927370 | orchestrator | 2026-02-04 01:14:40 | INFO  | Flavor SCS-1V-8 created
2026-02-04 01:14:43.927374 | orchestrator | 2026-02-04 01:14:40 | INFO  | Flavor SCS-1V-8-20 created
2026-02-04 01:14:43.927381 | orchestrator | 2026-02-04 01:14:41 | INFO  | Flavor SCS-2V-4 created
2026-02-04 01:14:43.927385 | orchestrator | 2026-02-04 01:14:41 | INFO  | Flavor SCS-2V-4-10 created
2026-02-04 01:14:43.927388 | orchestrator | 2026-02-04 01:14:41 | INFO  | Flavor SCS-2V-8 created
2026-02-04 01:14:43.927391 | orchestrator | 2026-02-04 01:14:41 | INFO  | Flavor SCS-2V-8-20 created
2026-02-04 01:14:43.927395 | orchestrator | 2026-02-04 01:14:41 | INFO  | Flavor SCS-2V-16 created
2026-02-04 01:14:43.927398 | orchestrator | 2026-02-04 01:14:41 | INFO  | Flavor SCS-2V-16-50 created
2026-02-04 01:14:43.927401 | orchestrator | 2026-02-04 01:14:42 | INFO  | Flavor SCS-4V-8 created
2026-02-04 01:14:43.927404 | orchestrator | 2026-02-04 01:14:42 | INFO  | Flavor SCS-4V-8-20 created
2026-02-04 01:14:43.927408 | orchestrator | 2026-02-04 01:14:42 | INFO  | Flavor SCS-4V-16 created
2026-02-04 01:14:43.927411 | orchestrator | 2026-02-04 01:14:42 | INFO  | Flavor SCS-4V-16-50 created
2026-02-04 01:14:43.927414 | orchestrator | 2026-02-04 01:14:42 | INFO  | Flavor SCS-4V-32 created
2026-02-04 01:14:43.927417 | orchestrator | 2026-02-04 01:14:42 | INFO  | Flavor SCS-4V-32-100 created
2026-02-04 01:14:43.927420 | orchestrator | 2026-02-04 01:14:42 | INFO  | Flavor SCS-8V-16 created
2026-02-04 01:14:43.927424 | orchestrator | 2026-02-04 01:14:42 | INFO  | Flavor SCS-8V-16-50 created
2026-02-04 01:14:43.927427 | orchestrator | 2026-02-04 01:14:42 | INFO  | Flavor SCS-8V-32 created
2026-02-04 01:14:43.927431 | orchestrator | 2026-02-04 01:14:43 | INFO  | Flavor SCS-8V-32-100 created
2026-02-04 01:14:43.927434 | orchestrator | 2026-02-04 01:14:43 | INFO  | Flavor SCS-16V-32 created
2026-02-04 01:14:43.927437 | orchestrator | 2026-02-04 01:14:43 | INFO  | Flavor SCS-16V-32-100 created
2026-02-04 01:14:43.927440 | orchestrator | 2026-02-04 01:14:43 | INFO  | Flavor SCS-2V-4-20s created
2026-02-04 01:14:43.927443 | orchestrator | 2026-02-04 01:14:43 | INFO  | Flavor SCS-4V-8-50s created
2026-02-04 01:14:43.927447 | orchestrator | 2026-02-04 01:14:43 | INFO  | Flavor SCS-8V-32-100s created
2026-02-04 01:14:46.289790 | orchestrator | 2026-02-04 01:14:46 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-02-04 01:14:56.366893 | orchestrator | 2026-02-04 01:14:56 | INFO  | Task 2d404f7e-52a7-4f23-8ec5-cb9e1c7bc871 (bootstrap-basic) was prepared for execution.
2026-02-04 01:14:56.366956 | orchestrator | 2026-02-04 01:14:56 | INFO  | It takes a moment until task 2d404f7e-52a7-4f23-8ec5-cb9e1c7bc871 (bootstrap-basic) has been started and output is visible here.
2026-02-04 01:15:40.701706 | orchestrator |
2026-02-04 01:15:40.701796 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-02-04 01:15:40.702208 | orchestrator |
2026-02-04 01:15:40.702220 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-04 01:15:40.702230 | orchestrator | Wednesday 04 February 2026 01:15:00 +0000 (0:00:00.068) 0:00:00.068 ****
2026-02-04 01:15:40.702245 | orchestrator | ok: [localhost]
2026-02-04 01:15:40.702255 | orchestrator |
2026-02-04 01:15:40.702265 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-02-04 01:15:40.702273 | orchestrator | Wednesday 04 February 2026 01:15:02 +0000 (0:00:01.836) 0:00:01.904 ****
2026-02-04 01:15:40.702282 | orchestrator | ok: [localhost]
2026-02-04 01:15:40.702291 | orchestrator |
2026-02-04 01:15:40.702300 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-02-04 01:15:40.702309 | orchestrator | Wednesday 04 February 2026 01:15:10 +0000 (0:00:08.218) 0:00:10.123 ****
2026-02-04 01:15:40.702319 | orchestrator | changed: [localhost]
2026-02-04 01:15:40.702329 | orchestrator |
2026-02-04 01:15:40.702340 | orchestrator | TASK [Create public network] ***************************************************
2026-02-04 01:15:40.702350 | orchestrator | Wednesday 04 February 2026 01:15:17 +0000 (0:00:06.795) 0:00:16.919 ****
2026-02-04 01:15:40.702359 | orchestrator | changed: [localhost]
2026-02-04 01:15:40.702369 | orchestrator |
2026-02-04 01:15:40.702379 | orchestrator | TASK [Set public network to default] *******************************************
2026-02-04 01:15:40.702389 | orchestrator | Wednesday 04 February 2026 01:15:22 +0000 (0:00:04.881) 0:00:21.800 ****
2026-02-04 01:15:40.702402 | orchestrator | changed: [localhost]
2026-02-04 01:15:40.702412 | orchestrator |
2026-02-04 01:15:40.702422 | orchestrator | TASK [Create public subnet] ****************************************************
2026-02-04 01:15:40.702432 | orchestrator | Wednesday 04 February 2026 01:15:29 +0000 (0:00:06.649) 0:00:28.450 ****
2026-02-04 01:15:40.702442 | orchestrator | changed: [localhost]
2026-02-04 01:15:40.702453 | orchestrator |
2026-02-04 01:15:40.702463 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-02-04 01:15:40.702471 | orchestrator | Wednesday 04 February 2026 01:15:33 +0000 (0:00:04.401) 0:00:32.852 ****
2026-02-04 01:15:40.702477 | orchestrator | changed: [localhost]
2026-02-04 01:15:40.702483 | orchestrator |
2026-02-04 01:15:40.702489 | orchestrator | TASK [Create manager role] *****************************************************
2026-02-04 01:15:40.702504 | orchestrator | Wednesday 04 February 2026 01:15:37 +0000 (0:00:03.670) 0:00:36.523 ****
2026-02-04 01:15:40.702510 | orchestrator | ok: [localhost]
2026-02-04 01:15:40.702516 | orchestrator |
2026-02-04 01:15:40.702523 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 01:15:40.702529 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:15:40.702536 | orchestrator |
2026-02-04 01:15:40.702542 | orchestrator |
2026-02-04 01:15:40.702548 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 01:15:40.702554 | orchestrator | Wednesday 04 February 2026 01:15:40 +0000 (0:00:03.350) 0:00:39.873 ****
2026-02-04 01:15:40.702561 | orchestrator | ===============================================================================
2026-02-04 01:15:40.702573 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.22s
2026-02-04 01:15:40.702585 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.80s
2026-02-04 01:15:40.702594 | orchestrator | Set public network to default ------------------------------------------- 6.65s
2026-02-04 01:15:40.702603 | orchestrator | Create public network --------------------------------------------------- 4.88s
2026-02-04 01:15:40.702644 | orchestrator | Create public subnet ---------------------------------------------------- 4.40s
2026-02-04 01:15:40.702652 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.67s
2026-02-04 01:15:40.702658 | orchestrator | Create manager role ----------------------------------------------------- 3.35s
2026-02-04 01:15:40.702665 | orchestrator | Gathering Facts --------------------------------------------------------- 1.84s
2026-02-04 01:15:42.987115 | orchestrator | 2026-02-04 01:15:42 | INFO  | It takes a moment until task 031360a7-4ed9-4fc2-9188-d6b0ce053ae3 (image-manager) has been started and output is visible here.
2026-02-04 01:16:23.879250 | orchestrator | 2026-02-04 01:15:45 | INFO  | Processing image 'Cirros 0.6.2'
2026-02-04 01:16:23.879317 | orchestrator | 2026-02-04 01:15:46 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-02-04 01:16:23.879350 | orchestrator | 2026-02-04 01:15:46 | INFO  | Importing image Cirros 0.6.2
2026-02-04 01:16:23.879359 | orchestrator | 2026-02-04 01:15:46 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-02-04 01:16:23.879367 | orchestrator | 2026-02-04 01:15:48 | INFO  | Waiting for image to leave queued state...
2026-02-04 01:16:23.879375 | orchestrator | 2026-02-04 01:15:50 | INFO  | Waiting for import to complete...
2026-02-04 01:16:23.879382 | orchestrator | 2026-02-04 01:16:00 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-02-04 01:16:23.879389 | orchestrator | 2026-02-04 01:16:00 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-02-04 01:16:23.879395 | orchestrator | 2026-02-04 01:16:00 | INFO  | Setting internal_version = 0.6.2
2026-02-04 01:16:23.879402 | orchestrator | 2026-02-04 01:16:00 | INFO  | Setting image_original_user = cirros
2026-02-04 01:16:23.879408 | orchestrator | 2026-02-04 01:16:00 | INFO  | Adding tag os:cirros
2026-02-04 01:16:23.879414 | orchestrator | 2026-02-04 01:16:00 | INFO  | Setting property architecture: x86_64
2026-02-04 01:16:23.879421 | orchestrator | 2026-02-04 01:16:01 | INFO  | Setting property hw_disk_bus: scsi
2026-02-04 01:16:23.879428 | orchestrator | 2026-02-04 01:16:01 | INFO  | Setting property hw_rng_model: virtio
2026-02-04 01:16:23.879434 | orchestrator | 2026-02-04 01:16:01 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-04 01:16:23.879441 | orchestrator | 2026-02-04 01:16:01 | INFO  | Setting property hw_watchdog_action: reset
2026-02-04 01:16:23.879447 | orchestrator | 2026-02-04 01:16:02 | INFO  | Setting property hypervisor_type: qemu
2026-02-04 01:16:23.879454 | orchestrator | 2026-02-04 01:16:02 | INFO  | Setting property os_distro: cirros
2026-02-04 01:16:23.879460 | orchestrator | 2026-02-04 01:16:02 | INFO  | Setting property os_purpose: minimal
2026-02-04 01:16:23.879467 | orchestrator | 2026-02-04 01:16:02 | INFO  | Setting property replace_frequency: never
2026-02-04 01:16:23.879474 | orchestrator | 2026-02-04 01:16:03 | INFO  | Setting property uuid_validity: none
2026-02-04 01:16:23.879480 | orchestrator | 2026-02-04 01:16:03 | INFO  | Setting property provided_until: none
2026-02-04 01:16:23.879486 | orchestrator | 2026-02-04 01:16:03 | INFO  | Setting property image_description: Cirros
2026-02-04 01:16:23.879493 | orchestrator | 2026-02-04 01:16:03 | INFO  | Setting property image_name: Cirros
2026-02-04 01:16:23.879499 | orchestrator | 2026-02-04 01:16:03 | INFO  | Setting property internal_version: 0.6.2
2026-02-04 01:16:23.879506 | orchestrator | 2026-02-04 01:16:04 | INFO  | Setting property image_original_user: cirros
2026-02-04 01:16:23.879530 | orchestrator | 2026-02-04 01:16:04 | INFO  | Setting property os_version: 0.6.2
2026-02-04 01:16:23.879542 | orchestrator | 2026-02-04 01:16:04 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-02-04 01:16:23.879550 | orchestrator | 2026-02-04 01:16:04 | INFO  | Setting property image_build_date: 2023-05-30
2026-02-04 01:16:23.879556 | orchestrator | 2026-02-04 01:16:04 | INFO  | Checking status of 'Cirros 0.6.2'
2026-02-04 01:16:23.879563 | orchestrator | 2026-02-04 01:16:04 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-02-04 01:16:23.879569 | orchestrator | 2026-02-04 01:16:04 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-02-04 01:16:23.879575 | orchestrator | 2026-02-04 01:16:05 | INFO  | Processing image 'Cirros 0.6.3'
2026-02-04 01:16:23.879585 | orchestrator | 2026-02-04 01:16:05 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-02-04 01:16:23.879592 | orchestrator | 2026-02-04 01:16:05 | INFO  | Importing image Cirros 0.6.3
2026-02-04 01:16:23.879599 | orchestrator | 2026-02-04 01:16:05 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-02-04 01:16:23.879606 | orchestrator | 2026-02-04 01:16:05 | INFO  | Waiting for image to leave queued state...
2026-02-04 01:16:23.879613 | orchestrator | 2026-02-04 01:16:07 | INFO  | Waiting for import to complete...
2026-02-04 01:16:23.879633 | orchestrator | 2026-02-04 01:16:18 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-02-04 01:16:23.879640 | orchestrator | 2026-02-04 01:16:18 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-02-04 01:16:23.879647 | orchestrator | 2026-02-04 01:16:18 | INFO  | Setting internal_version = 0.6.3
2026-02-04 01:16:23.879653 | orchestrator | 2026-02-04 01:16:18 | INFO  | Setting image_original_user = cirros
2026-02-04 01:16:23.879660 | orchestrator | 2026-02-04 01:16:18 | INFO  | Adding tag os:cirros
2026-02-04 01:16:23.879667 | orchestrator | 2026-02-04 01:16:18 | INFO  | Setting property architecture: x86_64
2026-02-04 01:16:23.879674 | orchestrator | 2026-02-04 01:16:19 | INFO  | Setting property hw_disk_bus: scsi
2026-02-04 01:16:23.879680 | orchestrator | 2026-02-04 01:16:19 | INFO  | Setting property hw_rng_model: virtio
2026-02-04 01:16:23.879687 | orchestrator | 2026-02-04 01:16:19 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-04 01:16:23.879694 | orchestrator | 2026-02-04 01:16:19 | INFO  | Setting property hw_watchdog_action: reset
2026-02-04 01:16:23.879701 | orchestrator | 2026-02-04 01:16:19 | INFO  | Setting property hypervisor_type: qemu
2026-02-04 01:16:23.879708 | orchestrator | 2026-02-04 01:16:20 | INFO  | Setting property os_distro: cirros
2026-02-04 01:16:23.879713 | orchestrator | 2026-02-04 01:16:20 | INFO  | Setting property os_purpose: minimal
2026-02-04 01:16:23.879720 | orchestrator | 2026-02-04 01:16:20 | INFO  | Setting property replace_frequency: never
2026-02-04 01:16:23.879727 | orchestrator | 2026-02-04 01:16:20 | INFO  | Setting property uuid_validity: none
2026-02-04 01:16:23.879734 | orchestrator | 2026-02-04 01:16:21 | INFO  | Setting property provided_until: none
2026-02-04 01:16:23.879741 | orchestrator | 2026-02-04 01:16:21 | INFO  | Setting property image_description: Cirros
2026-02-04 01:16:23.879747 | orchestrator | 2026-02-04 01:16:21 | INFO  | Setting property image_name: Cirros
2026-02-04 01:16:23.879755 | orchestrator | 2026-02-04 01:16:21 | INFO  | Setting property internal_version: 0.6.3
2026-02-04 01:16:23.879766 | orchestrator | 2026-02-04 01:16:21 | INFO  | Setting property image_original_user: cirros
2026-02-04 01:16:23.879773 | orchestrator | 2026-02-04 01:16:22 | INFO  | Setting property os_version: 0.6.3
2026-02-04 01:16:23.879780 | orchestrator | 2026-02-04 01:16:22 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-02-04 01:16:23.879787 | orchestrator | 2026-02-04 01:16:22 | INFO  | Setting property image_build_date: 2024-09-26
2026-02-04 01:16:23.879793 | orchestrator | 2026-02-04 01:16:22 | INFO  | Checking status of 'Cirros 0.6.3'
2026-02-04 01:16:23.879800 | orchestrator | 2026-02-04 01:16:22 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-02-04 01:16:23.879808 | orchestrator | 2026-02-04 01:16:22 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-02-04 01:16:24.216279 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-02-04 01:16:26.685450 | orchestrator | 2026-02-04 01:16:26 | INFO  | date: 2026-02-03
2026-02-04 01:16:26.685515 | orchestrator | 2026-02-04 01:16:26 | INFO  | image: octavia-amphora-haproxy-2024.2.20260203.qcow2
2026-02-04 01:16:26.685534 | orchestrator | 2026-02-04 01:16:26 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260203.qcow2
2026-02-04 01:16:26.685542 | orchestrator | 2026-02-04 01:16:26 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260203.qcow2.CHECKSUM
2026-02-04 01:16:26.860138 | orchestrator | 2026-02-04 01:16:26 | INFO  | checksum: d880b7d1e69be114deed8e1ea6aae1bb461587b7fcd8cdc7a6dedf8496c970b1
2026-02-04 01:16:26.933025 | orchestrator | 2026-02-04 01:16:26 | INFO  | It takes a moment until task e30ea77e-786a-42b2-bf8c-d75da8e9d3b6 (image-manager) has been started and output is visible here.
2026-02-04 01:17:52.362276 | orchestrator | 2026-02-04 01:16:29 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-02-03'
2026-02-04 01:17:52.362350 | orchestrator | 2026-02-04 01:16:29 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260203.qcow2: 200
2026-02-04 01:17:52.362370 | orchestrator | 2026-02-04 01:16:29 | INFO  | Importing image OpenStack Octavia Amphora 2026-02-03
2026-02-04 01:17:52.362376 | orchestrator | 2026-02-04 01:16:29 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260203.qcow2
2026-02-04 01:17:52.362381 | orchestrator | 2026-02-04 01:16:31 | INFO  | Waiting for image to leave queued state...
2026-02-04 01:17:52.362386 | orchestrator | 2026-02-04 01:16:33 | INFO  | Waiting for import to complete...
2026-02-04 01:17:52.362391 | orchestrator | 2026-02-04 01:16:43 | INFO  | Waiting for import to complete...
2026-02-04 01:17:52.362396 | orchestrator | 2026-02-04 01:16:53 | INFO  | Waiting for import to complete...
2026-02-04 01:17:52.362400 | orchestrator | 2026-02-04 01:17:03 | INFO  | Waiting for import to complete...
2026-02-04 01:17:52.362406 | orchestrator | 2026-02-04 01:17:13 | INFO  | Waiting for import to complete...
2026-02-04 01:17:52.362410 | orchestrator | 2026-02-04 01:17:23 | INFO  | Waiting for import to complete...
2026-02-04 01:17:52.362415 | orchestrator | 2026-02-04 01:17:33 | INFO  | Waiting for import to complete...
2026-02-04 01:17:52.362419 | orchestrator | 2026-02-04 01:17:43 | INFO  | Waiting for image to leave queued state...
2026-02-04 01:17:52.362424 | orchestrator | 2026-02-04 01:17:45 | INFO  | Waiting for image to leave queued state...
2026-02-04 01:17:52.362440 | orchestrator | 2026-02-04 01:17:47 | INFO  | Waiting for image to leave queued state...
2026-02-04 01:17:52.362445 | orchestrator | 2026-02-04 01:17:49 | INFO  | Waiting for image to leave queued state...
2026-02-04 01:17:52.362450 | orchestrator | 2026-02-04 01:17:51 | ERROR  | Image OpenStack Octavia Amphora 2026-02-03 seems stuck in queued state
2026-02-04 01:17:52.362455 | orchestrator | 2026-02-04 01:17:52 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-02-04 01:17:52.362459 | orchestrator | 2026-02-04 01:17:52 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-02-04 01:17:52.362464 | orchestrator | 2026-02-04 01:17:52 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-02-04 01:17:52.362468 | orchestrator | 2026-02-04 01:17:52 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-02-04 01:17:52.362473 | orchestrator |
2026-02-04 01:17:52.362477 | orchestrator | ERROR: One or more errors occurred during the execution of the program, please check the output.
2026-02-04 01:17:52.991109 | orchestrator | ERROR
2026-02-04 01:17:52.991521 | orchestrator | {
2026-02-04 01:17:52.991630 | orchestrator | "delta": "0:03:20.498957",
2026-02-04 01:17:52.991706 | orchestrator | "end": "2026-02-04 01:17:52.748546",
2026-02-04 01:17:52.991774 | orchestrator | "msg": "non-zero return code",
2026-02-04 01:17:52.991839 | orchestrator | "rc": 1,
2026-02-04 01:17:52.991901 | orchestrator | "start": "2026-02-04 01:14:32.249589"
2026-02-04 01:17:52.991963 | orchestrator | } failure
2026-02-04 01:17:53.004915 |
2026-02-04 01:17:53.005138 | PLAY RECAP
2026-02-04 01:17:53.005227 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2026-02-04 01:17:53.005262 |
2026-02-04 01:17:53.231608 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2026-02-04 01:17:53.232859 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-04 01:17:53.971399 |
2026-02-04 01:17:53.971569 | PLAY [Post output play]
2026-02-04 01:17:53.987728 |
2026-02-04 01:17:53.987862 | LOOP [stage-output : Register sources]
2026-02-04 01:17:54.057473 |
2026-02-04 01:17:54.057811 | TASK [stage-output : Check sudo]
2026-02-04 01:17:54.938563 | orchestrator | sudo: a password is required
2026-02-04 01:17:55.095403 | orchestrator | ok: Runtime: 0:00:00.013812
2026-02-04 01:17:55.111183 |
2026-02-04 01:17:55.111340 | LOOP [stage-output : Set source and destination for files and folders]
2026-02-04 01:17:55.151783 |
2026-02-04 01:17:55.152095 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-02-04 01:17:55.230580 | orchestrator | ok
2026-02-04 01:17:55.239572 |
2026-02-04 01:17:55.239701 | LOOP [stage-output : Ensure target folders exist]
2026-02-04 01:17:55.753903 | orchestrator | ok: "docs"
2026-02-04 01:17:55.754281 |
2026-02-04 01:17:56.048317 | orchestrator | ok: "artifacts"
2026-02-04 01:17:56.362761 | orchestrator | ok: "logs"
2026-02-04 01:17:56.387905 |
2026-02-04 01:17:56.388103 | LOOP [stage-output : Copy files and folders to staging folder]
2026-02-04 01:17:56.427502 |
2026-02-04 01:17:56.427777 | TASK [stage-output : Make all log files readable]
2026-02-04 01:17:56.764841 | orchestrator | ok
2026-02-04 01:17:56.777404 |
2026-02-04 01:17:56.777562 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-02-04 01:17:56.812538 | orchestrator | skipping: Conditional result was False
2026-02-04 01:17:56.829846 |
2026-02-04 01:17:56.830065 | TASK [stage-output : Discover log files for compression]
2026-02-04 01:17:56.854396 | orchestrator | skipping: Conditional result was False
2026-02-04 01:17:56.866772 |
2026-02-04 01:17:56.866957 | LOOP [stage-output : Archive everything from logs]
2026-02-04 01:17:56.906627 |
2026-02-04 01:17:56.906800 | PLAY [Post cleanup play]
2026-02-04 01:17:56.914915 |
2026-02-04 01:17:56.915031 | TASK [Set cloud fact (Zuul deployment)]
2026-02-04 01:17:56.971318 | orchestrator | ok
2026-02-04 01:17:56.987743 |
2026-02-04 01:17:56.987857 | TASK [Set cloud fact (local deployment)]
2026-02-04 01:17:57.021842 | orchestrator | skipping: Conditional result was False
2026-02-04 01:17:57.029507 |
2026-02-04 01:17:57.029610 | TASK [Clean the cloud environment]
2026-02-04 01:17:59.856211 | orchestrator | 2026-02-04 01:17:59 - clean up servers
2026-02-04 01:18:00.647314 | orchestrator | 2026-02-04 01:18:00 - testbed-manager
2026-02-04 01:18:00.730818 | orchestrator | 2026-02-04 01:18:00 - testbed-node-1
2026-02-04 01:18:00.817180 | orchestrator | 2026-02-04 01:18:00 - testbed-node-0
2026-02-04 01:18:00.900928 | orchestrator | 2026-02-04 01:18:00 - testbed-node-4
2026-02-04 01:18:00.988692 | orchestrator | 2026-02-04 01:18:00 - testbed-node-2
2026-02-04 01:18:01.080663 | orchestrator | 2026-02-04 01:18:01 - testbed-node-3
2026-02-04 01:18:01.168194 | orchestrator | 2026-02-04 01:18:01 - testbed-node-5
2026-02-04 01:18:01.254893 | orchestrator | 2026-02-04 01:18:01 - clean up keypairs
2026-02-04 01:18:01.270630 | orchestrator | 2026-02-04 01:18:01 - testbed
2026-02-04 01:18:01.291767 | orchestrator | 2026-02-04 01:18:01 - wait for servers to be gone
2026-02-04 01:18:12.052339 | orchestrator | 2026-02-04 01:18:12 - clean up ports
2026-02-04 01:18:12.245193 | orchestrator | 2026-02-04 01:18:12 - 518c514a-6b4e-4acf-97a2-c6738208c048
2026-02-04 01:18:12.722886 | orchestrator | 2026-02-04 01:18:12 - 5b25f043-4b92-40ac-bfc9-36c50ec61aa0
2026-02-04 01:18:13.014234 | orchestrator | 2026-02-04 01:18:13 - 8aeb47ba-58d7-4d9c-8b0d-861e330caeec
2026-02-04 01:18:13.236539 | orchestrator | 2026-02-04 01:18:13 - b30e108c-827e-40a9-b63a-406b29f09ed9
2026-02-04 01:18:13.889363 | orchestrator | 2026-02-04 01:18:13 - ccdff333-34bb-4d73-8ab7-22d21a80311a
2026-02-04 01:18:14.112459 | orchestrator | 2026-02-04 01:18:14 - e86de680-6216-4ee0-aeaa-75bd7b80689a
2026-02-04 01:18:14.328910 | orchestrator | 2026-02-04 01:18:14 - fa19e430-2f19-426f-ab44-644707bc0348
2026-02-04 01:18:14.578426 | orchestrator | 2026-02-04 01:18:14 - clean up volumes
2026-02-04 01:18:14.690115 | orchestrator | 2026-02-04 01:18:14 - testbed-volume-5-node-base
2026-02-04 01:18:14.737364 | orchestrator | 2026-02-04 01:18:14 - testbed-volume-2-node-base
2026-02-04 01:18:14.784069 | orchestrator | 2026-02-04 01:18:14 - testbed-volume-1-node-base
2026-02-04 01:18:14.823250 | orchestrator | 2026-02-04 01:18:14 - testbed-volume-0-node-base
2026-02-04 01:18:14.863825 | orchestrator | 2026-02-04 01:18:14 - testbed-volume-4-node-base
2026-02-04 01:18:14.910075 | orchestrator | 2026-02-04 01:18:14 - testbed-volume-3-node-base
2026-02-04 01:18:14.952043 | orchestrator | 2026-02-04 01:18:14 - testbed-volume-manager-base
2026-02-04 01:18:14.991177 | orchestrator | 2026-02-04 01:18:14 - testbed-volume-3-node-3
2026-02-04 01:18:15.035400 | orchestrator | 2026-02-04 01:18:15 - testbed-volume-8-node-5
2026-02-04 01:18:15.079841 | orchestrator | 2026-02-04 01:18:15 - testbed-volume-1-node-4
2026-02-04 01:18:15.127911 | orchestrator | 2026-02-04 01:18:15 - testbed-volume-7-node-4
2026-02-04 01:18:15.169171 | orchestrator | 2026-02-04 01:18:15 - testbed-volume-2-node-5
2026-02-04 01:18:15.208773 | orchestrator | 2026-02-04 01:18:15 - testbed-volume-6-node-3
2026-02-04 01:18:15.249277 | orchestrator | 2026-02-04 01:18:15 - testbed-volume-4-node-4
2026-02-04 01:18:15.287819 | orchestrator | 2026-02-04 01:18:15 - testbed-volume-5-node-5
2026-02-04 01:18:15.325402 | orchestrator | 2026-02-04 01:18:15 - testbed-volume-0-node-3
2026-02-04 01:18:15.371072 | orchestrator | 2026-02-04 01:18:15 - disconnect routers
2026-02-04 01:18:15.497665 | orchestrator | 2026-02-04 01:18:15 - testbed
2026-02-04 01:18:16.660300 | orchestrator | 2026-02-04 01:18:16 - clean up subnets
2026-02-04 01:18:16.708997 | orchestrator | 2026-02-04 01:18:16 - subnet-testbed-management
2026-02-04 01:18:16.899826 | orchestrator | 2026-02-04 01:18:16 - clean up networks
2026-02-04 01:18:17.076048 | orchestrator | 2026-02-04 01:18:17 - net-testbed-management
2026-02-04 01:18:17.377988 | orchestrator | 2026-02-04 01:18:17 - clean up security groups
2026-02-04 01:18:17.418425 | orchestrator | 2026-02-04 01:18:17 - testbed-node
2026-02-04 01:18:17.531074 | orchestrator | 2026-02-04 01:18:17 - testbed-management
2026-02-04 01:18:17.656226 | orchestrator | 2026-02-04 01:18:17 - clean up floating ips
2026-02-04 01:18:17.691506 | orchestrator | 2026-02-04 01:18:17 - 81.163.192.40
2026-02-04 01:18:18.051659 | orchestrator | 2026-02-04 01:18:18 - clean up routers
2026-02-04 01:18:18.114530 | orchestrator | 2026-02-04 01:18:18 - testbed
2026-02-04 01:18:19.609893 | orchestrator | ok: Runtime: 0:00:21.458482
2026-02-04 01:18:19.614812 |
2026-02-04 01:18:19.615073 | PLAY RECAP
2026-02-04 01:18:19.615211 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-02-04 01:18:19.615276 |
2026-02-04 01:18:19.752494 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-04 01:18:19.755118 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-04 01:18:20.466410 |
2026-02-04 01:18:20.466571 | PLAY [Cleanup play]
2026-02-04 01:18:20.482699 |
2026-02-04 01:18:20.482861 | TASK [Set cloud fact (Zuul deployment)]
2026-02-04 01:18:20.551385 | orchestrator | ok
2026-02-04 01:18:20.563327 |
2026-02-04 01:18:20.563631 | TASK [Set cloud fact (local deployment)]
2026-02-04 01:18:20.609389 | orchestrator | skipping: Conditional result was False
2026-02-04 01:18:20.626085 |
2026-02-04 01:18:20.626244 | TASK [Clean the cloud environment]
2026-02-04 01:18:21.854266 | orchestrator | 2026-02-04 01:18:21 - clean up servers
2026-02-04 01:18:22.348354 | orchestrator | 2026-02-04 01:18:22 - clean up keypairs
2026-02-04 01:18:22.364789 | orchestrator | 2026-02-04 01:18:22 - wait for servers to be gone
2026-02-04 01:18:22.410472 | orchestrator | 2026-02-04 01:18:22 - clean up ports
2026-02-04 01:18:22.494641 | orchestrator | 2026-02-04 01:18:22 - clean up volumes
2026-02-04 01:18:22.566753 | orchestrator | 2026-02-04 01:18:22 - disconnect routers
2026-02-04 01:18:22.594958 | orchestrator | 2026-02-04 01:18:22 - clean up subnets
2026-02-04 01:18:22.615414 | orchestrator | 2026-02-04 01:18:22 - clean up networks
2026-02-04 01:18:22.771985 | orchestrator | 2026-02-04 01:18:22 - clean up security groups
2026-02-04 01:18:22.802563 | orchestrator | 2026-02-04 01:18:22 - clean up floating ips
2026-02-04 01:18:22.824325 | orchestrator | 2026-02-04 01:18:22 - clean up routers
2026-02-04 01:18:23.163213 | orchestrator | ok: Runtime: 0:00:01.447110
2026-02-04 01:18:23.164870 |
2026-02-04 01:18:23.164959 | PLAY RECAP
2026-02-04 01:18:23.165106 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-02-04 01:18:23.165151 |
2026-02-04 01:18:23.283240 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-04 01:18:23.285731 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-04 01:18:24.012908 |
2026-02-04 01:18:24.013079 | PLAY [Base post-fetch]
2026-02-04 01:18:24.028231 |
2026-02-04 01:18:24.028362 | TASK [fetch-output : Set log path for multiple nodes]
2026-02-04 01:18:24.083944 | orchestrator | skipping: Conditional result was False
2026-02-04 01:18:24.098204 |
2026-02-04 01:18:24.098398 | TASK [fetch-output : Set log path for single node]
2026-02-04 01:18:24.155762 | orchestrator | ok
2026-02-04 01:18:24.164152 |
2026-02-04 01:18:24.164280 | LOOP [fetch-output : Ensure local output dirs]
2026-02-04 01:18:24.659696 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/7dc19ffc5a194c77af8a4f9675ea5084/work/logs"
2026-02-04 01:18:24.929767 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/7dc19ffc5a194c77af8a4f9675ea5084/work/artifacts"
2026-02-04 01:18:25.205604 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/7dc19ffc5a194c77af8a4f9675ea5084/work/docs"
2026-02-04 01:18:25.230524 |
2026-02-04 01:18:25.230688 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-02-04 01:18:26.162651 | orchestrator | changed: .d..t...... ./
2026-02-04 01:18:26.165520 | orchestrator | changed: All items complete
2026-02-04 01:18:26.165708 |
2026-02-04 01:18:26.869266 | orchestrator | changed: .d..t...... ./
2026-02-04 01:18:27.574240 | orchestrator | changed: .d..t......
./ 2026-02-04 01:18:27.593111 | 2026-02-04 01:18:27.593232 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-02-04 01:18:27.631270 | orchestrator | skipping: Conditional result was False 2026-02-04 01:18:27.635349 | orchestrator | skipping: Conditional result was False 2026-02-04 01:18:27.645233 | 2026-02-04 01:18:27.645318 | PLAY RECAP 2026-02-04 01:18:27.645371 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-02-04 01:18:27.645398 | 2026-02-04 01:18:27.773549 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-02-04 01:18:27.776143 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-02-04 01:18:28.542136 | 2026-02-04 01:18:28.542303 | PLAY [Base post] 2026-02-04 01:18:28.557140 | 2026-02-04 01:18:28.557266 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-02-04 01:18:29.768018 | orchestrator | changed 2026-02-04 01:18:29.778361 | 2026-02-04 01:18:29.778485 | PLAY RECAP 2026-02-04 01:18:29.778561 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-02-04 01:18:29.778638 | 2026-02-04 01:18:29.899827 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-02-04 01:18:29.902311 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-02-04 01:18:30.707480 | 2026-02-04 01:18:30.707656 | PLAY [Base post-logs] 2026-02-04 01:18:30.718295 | 2026-02-04 01:18:30.718440 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-02-04 01:18:31.181168 | localhost | changed 2026-02-04 01:18:31.194962 | 2026-02-04 01:18:31.195160 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-02-04 01:18:31.233386 | localhost | ok 2026-02-04 01:18:31.240201 | 2026-02-04 01:18:31.240361 | TASK [Set zuul-log-path fact] 2026-02-04 
01:18:31.259403 | localhost | ok 2026-02-04 01:18:31.273817 | 2026-02-04 01:18:31.273966 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-02-04 01:18:31.312116 | localhost | ok 2026-02-04 01:18:31.320075 | 2026-02-04 01:18:31.320250 | TASK [upload-logs : Create log directories] 2026-02-04 01:18:31.845217 | localhost | changed 2026-02-04 01:18:31.848097 | 2026-02-04 01:18:31.848211 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-02-04 01:18:32.357972 | localhost -> localhost | ok: Runtime: 0:00:00.006985 2026-02-04 01:18:32.362597 | 2026-02-04 01:18:32.362721 | TASK [upload-logs : Upload logs to log server] 2026-02-04 01:18:32.908085 | localhost | Output suppressed because no_log was given 2026-02-04 01:18:32.912607 | 2026-02-04 01:18:32.912826 | LOOP [upload-logs : Compress console log and json output] 2026-02-04 01:18:32.980602 | localhost | skipping: Conditional result was False 2026-02-04 01:18:32.988346 | localhost | skipping: Conditional result was False 2026-02-04 01:18:32.998114 | 2026-02-04 01:18:32.998493 | LOOP [upload-logs : Upload compressed console log and json output] 2026-02-04 01:18:33.055978 | localhost | skipping: Conditional result was False 2026-02-04 01:18:33.056574 | 2026-02-04 01:18:33.060277 | localhost | skipping: Conditional result was False 2026-02-04 01:18:33.073369 | 2026-02-04 01:18:33.073651 | LOOP [upload-logs : Upload console log and json output]
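Editor's note: both cleanup passes above (the post playbook and the cleanup playbook) walk the same fixed teardown sequence. The order is not arbitrary: in OpenStack, servers must be gone before their ports can be deleted, ports before their network, and a router must be disconnected from its subnets before subnets, networks, and finally the router itself can be removed. A minimal sketch of that ordering is below; the step names mirror the log messages, while `run_cleanup` and the `actions` mapping are hypothetical stand-ins for the real testbed script's openstacksdk calls, which are not shown in this log.

```python
# Teardown order as observed in the console log above. The real cleanup
# script issues OpenStack API calls per step; here each step is an
# optional callable so the ordering logic can be exercised on its own.

CLEANUP_ORDER = [
    "clean up servers",
    "clean up keypairs",
    "wait for servers to be gone",   # ports stay attached until servers are gone
    "clean up ports",
    "clean up volumes",
    "disconnect routers",            # detach router interfaces before subnet delete
    "clean up subnets",
    "clean up networks",
    "clean up security groups",
    "clean up floating ips",
    "clean up routers",
]

def run_cleanup(actions):
    """Run each cleanup step in order.

    `actions` maps a step name to a callable; steps without a handler
    are no-ops, so a partial mapping is fine for dry runs.
    """
    done = []
    for step in CLEANUP_ORDER:
        actions.get(step, lambda: None)()
        done.append(step)
    return done
```

The second pass in the log finishes in about a second because the first pass already removed everything, so every step is effectively a no-op, which is exactly why an idempotent, fixed-order cleanup is safe to run from multiple post playbooks.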